Title

On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving

Authors

Giulio Rossolini, Federico Nesti, Gianluca D'Amico, Saasha Nair, Alessandro Biondi, Giorgio Buttazzo

Abstract

The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks, such as visual perception in autonomous driving. This paper presents an extensive evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches, including digital, simulated, and physical ones. A novel loss function is proposed to improve the ability of attackers to induce pixel misclassifications. In addition, a novel attack strategy is presented to improve the Expectation Over Transformation (EOT) method for placing a patch in the scene. Finally, a state-of-the-art method for detecting adversarial patches is first extended to cope with semantic segmentation models, then improved to obtain real-time performance, and eventually evaluated in real-world scenarios. Experimental results reveal that, even though the adversarial effect is visible with both digital and real-world attacks, its impact is often spatially confined to areas of the image around the patch. This raises further questions about the spatial robustness of real-time semantic segmentation models.
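To make the attack pipeline the abstract refers to more concrete, below is a minimal, illustrative sketch of an EOT-style adversarial patch optimization against a semantic segmentation model. It is not the paper's implementation: the DeepLabV3 model, the random-placement transform, and the plain per-pixel cross-entropy loss are placeholder assumptions (the paper proposes a stronger custom loss and a richer set of placement transformations).

```python
# Illustrative sketch only: generic EOT-style patch optimization for a
# segmentation model. Model, transforms, and loss are assumptions, not
# the method proposed in the paper.
import torch
import torch.nn.functional as F
import torchvision


def random_placement(image, patch):
    """Paste the patch at a random location: a crude stand-in for EOT
    transformations (translation, scale, lighting, perspective)."""
    _, _, H, W = image.shape
    _, _, ph, pw = patch.shape
    top = torch.randint(0, H - ph, (1,)).item()
    left = torch.randint(0, W - pw, (1,)).item()
    out = image.clone()
    out[:, :, top:top + ph, left:left + pw] = patch  # grads flow to patch
    return out


def eot_patch_attack(model, images, target_class, patch_size=(64, 64),
                     steps=200, lr=0.01, samples_per_step=4):
    """Optimize a patch so that, in expectation over random placements,
    the model labels as many pixels as possible with `target_class`."""
    patch = torch.rand(1, 3, *patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for img in images:
            for _ in range(samples_per_step):  # Monte Carlo estimate of EOT
                adv = random_placement(img.unsqueeze(0), patch)
                logits = model(adv)["out"]     # (1, C, H, W)
                tgt = torch.full(logits.shape[-2:], target_class,
                                 dtype=torch.long).unsqueeze(0)
                # Plain per-pixel cross-entropy toward the target class;
                # the paper proposes a stronger custom loss instead.
                loss = loss + F.cross_entropy(logits, tgt)
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0.0, 1.0)            # keep the patch a valid image
    return patch.detach()


if __name__ == "__main__":
    model = torchvision.models.segmentation.deeplabv3_resnet50(
        weights=None, num_classes=21)
    model.eval()
    imgs = [torch.rand(3, 256, 256)]            # stand-in input images
    patch = eot_patch_attack(model, imgs, target_class=0, steps=2)
    print(patch.shape)
```

Averaging the loss over several random placements per step is what makes the patch transformation-robust: gradients favor perturbations that remain effective wherever the patch lands, which is the core idea behind EOT.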
