Paper Title

Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition

Paper Authors

Shuai Jia, Bangjie Yin, Taiping Yao, Shouhong Ding, Chunhua Shen, Xiaokang Yang, Chao Ma

Paper Abstract

Deep learning models have shown their vulnerability when dealing with adversarial attacks. Existing attacks mostly operate on low-level instances, such as pixels and super-pixels, and rarely exploit semantic clues. For face recognition attacks, existing methods typically generate l_p-norm perturbations on pixels, which results in low attack transferability and high vulnerability to denoising defense models. In this work, instead of perturbing low-level pixels, we propose to generate attacks by perturbing high-level semantics to improve attack transferability. Specifically, a unified flexible framework, Adversarial Attributes (Adv-Attribute), is designed to generate inconspicuous and transferable attacks on face recognition; it crafts adversarial noise and adds it to different facial attributes under the guidance of the difference between the source and target face recognition features. Moreover, an importance-aware attribute selection and a multi-objective optimization strategy are introduced to further balance stealthiness and attacking strength. Extensive experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates while maintaining better visual quality than recent attack methods.
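To make the core idea concrete, below is a minimal sketch (not the authors' released code) of an attribute-space attack with a multi-objective loss that trades off attack strength against stealthiness. The names `AttributeGenerator`, `FaceEncoder`, and `attribute_attack`, and the simple L2 stealthiness proxy, are hypothetical stand-ins introduced only for illustration; the paper's actual generator, attribute selection, and optimization details differ.

```python
# Minimal sketch of an attribute-level adversarial attack (assumptions noted above).
# Idea: optimize a perturbation on a high-level attribute code rather than on pixels,
# balancing an attack objective (match the target identity embedding) against a
# stealthiness objective (stay close to the original attributes).

import torch
import torch.nn as nn
import torch.nn.functional as F


class AttributeGenerator(nn.Module):
    """Hypothetical stand-in for a pretrained attribute-conditioned face generator."""
    def __init__(self, latent_dim=64, image_dim=3 * 32 * 32):
        super().__init__()
        self.decode = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                    nn.Linear(256, image_dim), nn.Tanh())

    def forward(self, attr_code):
        return self.decode(attr_code)


class FaceEncoder(nn.Module):
    """Hypothetical stand-in for a pretrained face recognition backbone."""
    def __init__(self, image_dim=3 * 32 * 32, embed_dim=128):
        super().__init__()
        self.encode = nn.Linear(image_dim, embed_dim)

    def forward(self, image):
        return F.normalize(self.encode(image), dim=-1)


def attribute_attack(attr_code, target_embed, generator, encoder,
                     steps=100, lr=0.05, lam=0.1):
    """Optimize a perturbation on the attribute code (not on pixels):
    - attack term pulls the adversarial face's embedding toward the target identity;
    - stealth term keeps the edited attributes close to the original ones."""
    delta = torch.zeros_like(attr_code, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv_image = generator(attr_code + delta)
        adv_embed = encoder(adv_image)
        attack_loss = 1.0 - F.cosine_similarity(adv_embed, target_embed, dim=-1).mean()
        stealth_loss = delta.pow(2).mean()       # crude proxy for visual stealthiness
        loss = attack_loss + lam * stealth_loss  # weighted multi-objective trade-off
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(attr_code + delta.detach())


if __name__ == "__main__":
    torch.manual_seed(0)
    gen, enc = AttributeGenerator(), FaceEncoder()
    source_attr = torch.randn(1, 64)                          # attribute code of the source face
    target_embed = F.normalize(torch.randn(1, 128), dim=-1)   # embedding of the target identity
    adv_face = attribute_attack(source_attr, target_embed, gen, enc)
    print(adv_face.shape)  # torch.Size([1, 3072])
```

In this toy setup the weight `lam` plays the role of the balance between stealthiness and attacking strength; the paper instead selects which attributes to perturb via importance-aware attribute selection and resolves the trade-off with a multi-objective optimization strategy.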
