Paper Title

Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios

Paper Authors

Jung Im Choi, Qing Tian

Paper Abstract

Visual detection is a key task in autonomous driving, and it serves as a crucial foundation for self-driving planning and control. Deep neural networks have achieved promising results in various visual tasks, but they are known to be vulnerable to adversarial attacks. A comprehensive understanding of deep visual detectors' vulnerability is required before people can improve their robustness. However, only a few adversarial attack/defense works have focused on object detection, and most of them employed only classification and/or localization losses, ignoring the objectness aspect. In this paper, we identify a serious objectness-related adversarial vulnerability in YOLO detectors and present an effective attack strategy targeting the objectness aspect of visual detection in autonomous vehicles. Furthermore, to address such vulnerability, we propose a new objectness-aware adversarial training approach for visual detection. Experiments show that the proposed attack targeting the objectness aspect is 45.17% and 43.50% more effective than those generated from classification and/or localization losses on the KITTI and COCO traffic datasets, respectively. Also, the proposed adversarial defense approach can improve the detectors' robustness against objectness-oriented attacks by up to 21% and 12% mAP on KITTI and COCO traffic, respectively.
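To make the objectness angle concrete, below is a minimal sketch of a PGD-style attack driven solely by the detector's objectness loss, in the spirit of the attack the abstract describes. This is not the authors' implementation: the model interface (a forward pass returning per-anchor objectness logits under an "objectness" key), the perturbation budget, and the step schedule are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def objectness_pgd_attack(model, images, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style attack that perturbs images to suppress objectness scores.

    Hypothetical interface: `model(x)` is assumed to return a dict whose
    "objectness" entry holds per-anchor objectness logits. The eps, alpha,
    and steps values are illustrative, not the paper's settings.
    """
    adv = images.clone().detach()
    # Random start inside the L-infinity ball of radius eps.
    adv = (adv + torch.empty_like(adv).uniform_(-eps, eps)).clamp(0, 1)

    for _ in range(steps):
        adv.requires_grad_(True)
        obj_logits = model(adv)["objectness"]
        # This loss is minimized when every objectness logit says "no object",
        # so descending it makes detections disappear.
        loss = F.binary_cross_entropy_with_logits(
            obj_logits, torch.zeros_like(obj_logits)
        )
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv - alpha * grad.sign()                 # descend the loss
            adv = images + (adv - images).clamp(-eps, eps)  # project to eps-ball
            adv = adv.clamp(0, 1).detach()                  # keep valid pixels
    return adv
```

On the defense side, the objectness-aware adversarial training the abstract mentions would presumably fold such objectness-targeted perturbations back into the detector's training loop; the exact loss formulation and weighting are given in the paper itself.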
