Paper Title
DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips
Paper Authors
Paper Abstract
Security of machine learning is increasingly becoming a major concern due to the ubiquitous deployment of deep learning in many security-sensitive domains. Many prior studies have shown external attacks such as adversarial examples that tamper with the integrity of DNNs using maliciously crafted inputs. However, the security implications of internal threats (i.e., hardware vulnerabilities) to DNN models are not yet well understood. In this paper, we demonstrate DeepHammer, the first hardware-based attack on quantized deep neural networks, which deterministically induces bit flips in model weights to compromise DNN inference by exploiting the Rowhammer vulnerability. DeepHammer performs an aggressive bit search in the DNN model to identify the most vulnerable weight bits that are flippable under system constraints. To trigger deterministic bit flips across multiple pages within a reasonable amount of time, we develop novel system-level techniques that enable fast deployment of victim pages, memory-efficient rowhammering, and precise flipping of targeted bits. DeepHammer can deliberately degrade the inference accuracy of the victim DNN system to a level that is no better than random guessing, thus completely depleting the intelligence of the targeted DNN system. We systematically demonstrate our attacks on real systems against 12 DNN architectures with 4 different datasets across different application domains. Our evaluation shows that DeepHammer is able to successfully tamper with DNN inference behavior at run time within a few minutes. We further discuss several mitigation techniques at both the algorithm and system levels to protect DNNs against such attacks. Our work highlights the need to incorporate security mechanisms into future deep learning systems to enhance the robustness of DNNs against hardware-based deterministic fault injections.
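The idea of searching for a damaging chain of weight-bit flips, as summarized in the abstract, can be illustrated with a small sketch. The snippet below is a minimal, hypothetical greedy search over int8-quantized weights and is not the paper's actual algorithm; the names evaluate_accuracy (a callback that runs inference on a small validation batch), flippable_mask (which bits are flippable under the Rowhammer/system constraints), and random_guess_acc are all illustrative assumptions.

```python
import numpy as np

def greedy_bit_flip_chain(weights, evaluate_accuracy, flippable_mask,
                          random_guess_acc, max_flips=24, n_bits=8):
    """Hypothetical greedy sketch of a progressive bit search: at each step, flip
    the single flippable bit that causes the largest accuracy drop, keep it, and
    repeat until accuracy falls to the random-guess level or the budget runs out."""
    w = weights.view(np.uint8).ravel().copy()           # view int8 weights as raw bytes
    flippable = np.flatnonzero(flippable_mask.ravel())  # candidate bytes under system constraints

    def acc_of(buf):
        # Reinterpret the byte buffer as the quantized weight tensor and evaluate it.
        return evaluate_accuracy(buf.reshape(weights.shape).view(np.int8))

    chain, current = [], acc_of(w)
    for _ in range(max_flips):
        best = None
        for idx in flippable:
            for bit in range(n_bits):
                w[idx] ^= (1 << bit)                    # tentatively flip one bit
                acc = acc_of(w)
                w[idx] ^= (1 << bit)                    # undo the tentative flip
                if best is None or acc < best[0]:
                    best = (acc, idx, bit)
        acc, idx, bit = best
        w[idx] ^= (1 << bit)                            # commit the most damaging flip
        chain.append((idx, bit))
        current = acc
        if current <= random_guess_acc:                 # accuracy degraded to random guessing
            break
    return chain, current
```

This sketch exhaustively re-evaluates accuracy for every candidate flip, which is only tractable for a small set of flippable bits; the actual attack additionally accounts for which physical bit locations the Rowhammer exploit can reach.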