Paper Title
Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs
Paper Authors
Abstract
Although deep neural networks (DNNs) are known to be fragile, no one has studied how zooming in and out of images in the physical world affects DNN performance. In this paper, we demonstrate a novel physical adversarial attack technique called Adversarial Zoom Lens (AdvZL), which uses a zoom lens to zoom in and out of pictures of the physical world, fooling DNNs without changing the characteristics of the target object. To date, the proposed method is the only adversarial attack technique that fools DNNs without adding any physical adversarial perturbation. In the digital environment, we construct a dataset based on AdvZL to verify the adversarial effect of equal-scale enlarged images on DNNs. In the physical environment, we manipulate the zoom lens to zoom in and out of the target object and generate adversarial samples. The experimental results demonstrate the effectiveness of AdvZL in both digital and physical environments. We further analyze the adversarial effect of the proposed dataset on improved DNNs. In addition, we provide a guideline for defending against AdvZL by means of adversarial training. Finally, we look into the threat the proposed approach poses to future autonomous driving, as well as variant attack ideas similar to the proposed attack.
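The digital side of the attack described above can be approximated in code. The sketch below is a hypothetical illustration, not the paper's actual pipeline: it assumes that an "equal-scale enlarged image" can be simulated by center-cropping an image and resizing the crop back to the original resolution (here with nearest-neighbour sampling), producing a zoomed-in sample that could then be fed to a DNN classifier. The function name `digital_zoom` and the crop-and-resize scheme are assumptions for illustration only.

```python
import numpy as np

def digital_zoom(image: np.ndarray, factor: float) -> np.ndarray:
    """Simulate a zoom-in by `factor`: center-crop the image, then
    resize the crop back to the original resolution using
    nearest-neighbour sampling. Hypothetical approximation of AdvZL's
    equal-scale enlargement; the paper's exact procedure may differ."""
    h, w = image.shape[:2]
    # size of the central region that fills the frame after zooming
    ch, cw = max(1, int(h / factor)), max(1, int(w / factor))
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[top:top + ch, left:left + cw]
    # nearest-neighbour upsampling back to (h, w) via index maps
    rows = (np.arange(h) * ch / h).astype(int)
    cols = (np.arange(w) * cw / w).astype(int)
    return crop[rows][:, cols]

# Toy 8x8 "image": zoom in by 2x; the output keeps the original shape
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
zoomed = digital_zoom(img, 2.0)
```

A zoomed sample generated this way leaves the target object's features unchanged (no perturbation is added); only the apparent scale differs, which is exactly the property the abstract claims is sufficient to fool DNNs.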