Paper Title
Adversarial Neon Beam: A Light-based Physical Attack to DNNs
Paper Authors
Paper Abstract
In the physical world, deep neural networks (DNNs) are affected by light and shadow, which can significantly degrade their performance. While stickers have traditionally served as perturbations in most physical attacks, such perturbations are often easy to detect. To address this, some studies have explored light-based perturbations, such as lasers or projectors, which are subtler but still appear artificial rather than natural. In this study, we introduce a novel light-based attack called the adversarial neon beam (AdvNB), which utilizes common neon beams to create a natural black-box physical attack. Our approach is evaluated on three key criteria: effectiveness, stealthiness, and robustness. Quantitative results obtained in simulated environments demonstrate the effectiveness of the proposed method, and in physical scenarios we achieve an attack success rate of 81.82%, surpassing the baseline. By using common neon beams as perturbations, we enhance the stealthiness of the proposed attack, making the physical samples appear more natural. Moreover, we validate the robustness of our approach by successfully attacking advanced DNNs with a success rate of over 75% in all cases. We also discuss defense strategies against the AdvNB attack and propose other light-based physical attacks.