Paper Title

ALA: Naturalness-aware Adversarial Lightness Attack

Authors

Yihao Huang, Liangru Sun, Qing Guo, Felix Juefei-Xu, Jiayi Zhu, Jincao Feng, Yang Liu, Geguang Pu

Abstract

Most researchers have tried to enhance the robustness of DNNs by revealing and repairing their vulnerabilities with specialized adversarial examples. Some of these adversarial examples carry imperceptible perturbations restricted by the Lp norm. However, due to their high-frequency property, such examples can be defended against by denoising methods and are hard to realize in the physical world. To avoid these defects, some works have proposed unrestricted attacks to gain better robustness and practicality. Disappointingly, the resulting examples usually look unnatural and can alert defenders. In this paper, we propose the Adversarial Lightness Attack (ALA), a white-box unrestricted adversarial attack that focuses on modifying the lightness of images. The shape and color of the samples, which are crucial to human perception, are barely influenced. To obtain adversarial examples with a high attack success rate, we propose unconstrained enhancement of the light and shade relationship in images. To enhance the naturalness of the images, we craft a naturalness-aware regularization based on the range and distribution of light. The effectiveness of ALA is verified on two popular datasets for different tasks (i.e., ImageNet for image classification and Places-365 for scene recognition).
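The abstract's core idea of perturbing only lightness while preserving shape and color can be illustrated with a small sketch. The paper's exact parameterization is not given here; this example assumes a monotonic piecewise-linear tone curve applied to a lightness channel (an adversary would optimize the segment heights), together with a simple naturalness regularizer that penalizes deviation from the identity curve. The function names and the four-segment curve are illustrative assumptions, not the authors' implementation.

```python
def tone_curve(l, increments):
    """Map lightness l in [0, 1] through a monotonic piecewise-linear curve.

    `increments` are positive segment heights; normalizing them keeps the
    curve monotonic and onto [0, 1], so only light/shade relations change
    while hue and saturation (handled elsewhere) stay untouched.
    (Illustrative parameterization, not the paper's exact one.)
    """
    n = len(increments)
    total = sum(increments)
    # Cumulative curve heights at the segment boundaries.
    cum = [0.0]
    for inc in increments:
        cum.append(cum[-1] + inc / total)
    # Locate the segment containing l and interpolate linearly inside it.
    seg = min(int(l * n), n - 1)
    frac = l * n - seg
    return cum[seg] + frac * (cum[seg + 1] - cum[seg])

def naturalness_penalty(increments):
    """Grows as the curve departs from identity (equal increments),
    a stand-in for the paper's naturalness-aware regularization."""
    n = len(increments)
    total = sum(increments)
    return sum((inc / total - 1.0 / n) ** 2 for inc in increments)

# Equal increments give the identity curve: lightness is unchanged.
identity = [1.0] * 4
print(tone_curve(0.5, identity))        # 0.5
print(naturalness_penalty(identity))    # 0.0

# A skewed curve brightens shadows but still maps [0, 1] onto [0, 1].
skewed = [2.0, 1.0, 0.5, 0.5]
print(tone_curve(0.0, skewed), tone_curve(1.0, skewed))  # 0.0 1.0
```

In an attack loop, the segment heights would be updated by gradient ascent on the target model's loss, with the penalty term traded off against attack success to keep the relit image looking natural.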
