Paper Title

Physical-World Optical Adversarial Attacks on 3D Face Recognition

Authors

Yanjie Li, Yiquan Li, Xuelong Dai, Songtao Guo, Bin Xiao

Abstract

2D face recognition has been proven insecure for physical adversarial attacks. However, few studies have investigated the possibility of attacking real-world 3D face recognition systems. 3D-printed attacks recently proposed cannot generate adversarial points in the air. In this paper, we attack 3D face recognition systems through elaborate optical noises. We took structured light 3D scanners as our attack target. End-to-end attack algorithms are designed to generate adversarial illumination for 3D faces through the inherent or an additional projector to produce adversarial points at arbitrary positions. Nevertheless, face reflectance is a complex procedure because the skin is translucent. To involve this projection-and-capture procedure in optimization loops, we model it by Lambertian rendering model and use SfSNet to estimate the albedo. Moreover, to improve the resistance to distance and angle changes while maintaining the perturbation unnoticeable, a 3D transform invariant loss and two kinds of sensitivity maps are introduced. Experiments are conducted in both simulated and physical worlds. We successfully attacked point-cloud-based and depth-image-based 3D face recognition algorithms while needing fewer perturbations than previous state-of-the-art physical-world 3D adversarial attacks.
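The Lambertian rendering model mentioned in the abstract models the projection-and-capture step as the product of surface albedo and the cosine between the surface normal and the light direction. A minimal, illustrative sketch is below; the function and variable names are assumptions for illustration, not identifiers from the paper.

```python
import numpy as np

def lambertian_render(albedo, normals, light_dir, light_intensity=1.0):
    """Render a Lambertian surface.

    albedo: (H, W) or (H, W, 3) reflectance map (e.g. from an albedo
            estimator such as SfSNet, as the paper uses).
    normals: (H, W, 3) unit surface normals.
    light_dir: (3,) direction toward the light source.
    """
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    # Cosine shading term, clamped at zero for back-facing points.
    shading = np.clip(normals @ light_dir, 0.0, None)
    if albedo.ndim == 3:
        shading = shading[..., None]  # broadcast over color channels
    return light_intensity * albedo * shading

# Toy example: a flat patch facing the camera, lit head-on,
# so the rendered intensity equals the albedo.
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0
albedo = np.full((2, 2), 0.6)
image = lambertian_render(albedo, normals, [0.0, 0.0, 1.0])
```

Because this forward model is differentiable, the adversarial illumination can be optimized end to end through it, which is the role it plays in the paper's attack loop.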
