Paper Title

Benchmarking Joint Face Spoofing and Forgery Detection with Visual and Physiological Cues

Authors

Zitong Yu, Rizhao Cai, Zhi Li, Wenhan Yang, Jingang Shi, Alex C. Kot

Abstract

Face anti-spoofing (FAS) and face forgery detection play vital roles in securing face biometric systems against presentation attacks (PAs) and vicious digital manipulation (e.g., deepfakes). Despite promising performance upon large-scale data and powerful deep models, the generalization problem of existing approaches remains an open issue. Most recent approaches focus on 1) unimodal visual appearance or physiological (i.e., remote photoplethysmography (rPPG)) cues; and 2) separated feature representations for FAS or face forgery detection. On the one hand, unimodal appearance and rPPG features are respectively vulnerable to high-fidelity 3D face masks and video replay attacks, inspiring us to design reliable multi-modal fusion mechanisms for generalized face attack detection. On the other hand, there are rich common features across the FAS and face forgery detection tasks (e.g., periodic rPPG rhythms and vanilla appearance for bonafides), providing solid evidence for designing a joint FAS and face forgery detection system in a multi-task learning fashion. In this paper, we establish the first joint face spoofing and forgery detection benchmark using both visual appearance and physiological rPPG cues. To enhance the rPPG periodicity discrimination, we design a two-branch physiological network using both the facial spatio-temporal rPPG signal map and its continuous wavelet transformed counterpart as inputs. To mitigate the modality bias and improve the fusion efficacy, we apply weighted batch and layer normalization to both appearance and rPPG features before multi-modal fusion. We find that the generalization capacities of both unimodal (appearance or rPPG) and multi-modal (appearance+rPPG) models can be markedly improved via joint training on these two tasks. We hope this new benchmark will facilitate future research in both the FAS and deepfake detection communities.
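The abstract's second branch feeds a continuous wavelet transformed (CWT) counterpart of the rPPG signal to the network, which turns a 1-D pulse trace into a 2-D time-frequency scalogram whose periodicity is easy to discriminate. Below is a minimal numpy sketch of that idea using a Morlet wavelet on a synthetic 72-bpm pulse; the function `morlet_cwt`, the wavelet choice, and parameters such as `w0` and the scale range are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Naive continuous wavelet transform with a complex Morlet wavelet.
    Returns |coefficients| of shape (len(scales), len(signal)).
    Hypothetical stand-in for the paper's CWT preprocessing step."""
    n = len(signal)
    t = np.arange(n) - n // 2
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Morlet wavelet at scale s, with 1/sqrt(s) energy normalization.
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2) / np.sqrt(s)
        # Correlate the signal with the wavelet and keep the magnitude.
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet)[::-1], mode="same"))
    return out

fs = 30.0                                   # 30 fps face video
t = np.arange(0, 10, 1 / fs)                # 10-second clip (300 frames)
rppg = np.sin(2 * np.pi * 1.2 * t)          # 1.2 Hz pulse, i.e. 72 bpm
scalogram = morlet_cwt(rppg, scales=np.arange(2, 32))
print(scalogram.shape)                      # (30, 300): scales x time
```

A periodic bonafide pulse concentrates energy in a narrow band of scales of the scalogram, whereas the noisy rPPG trace of a mask or replayed video spreads energy broadly, which is the discriminative cue the physiological branch exploits.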
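The weighted batch and layer normalization applied before fusion can be illustrated with a small numpy sketch: batch normalization aligns the statistics of each feature dimension across samples, layer normalization aligns each sample across its feature dimensions, and a weight blends the two before the appearance and rPPG features are concatenated. The function name `weighted_bn_ln`, the blending weight `alpha`, and the toy feature shapes are hypothetical, not the authors' exact formulation.

```python
import numpy as np

def weighted_bn_ln(feat, alpha=0.5, eps=1e-5):
    """Blend batch normalization (per-dimension stats over the batch) with
    layer normalization (per-sample stats over dimensions) by weight alpha.
    A simplified, assumed version of the paper's weighted normalization."""
    bn = (feat - feat.mean(axis=0, keepdims=True)) / \
        np.sqrt(feat.var(axis=0, keepdims=True) + eps)
    ln = (feat - feat.mean(axis=1, keepdims=True)) / \
        np.sqrt(feat.var(axis=1, keepdims=True) + eps)
    return alpha * bn + (1.0 - alpha) * ln

rng = np.random.default_rng(0)
# Toy features with deliberately mismatched statistics across modalities.
appearance = rng.normal(5.0, 2.0, size=(8, 16))   # appearance branch output
rppg = rng.normal(-1.0, 0.5, size=(8, 16))        # physiological branch output
fused = np.concatenate([weighted_bn_ln(appearance), weighted_bn_ln(rppg)], axis=1)
print(fused.shape)                                # (8, 32)
```

Normalizing each modality before concatenation prevents the branch with larger feature magnitudes from dominating the fused representation, which is the modality-bias problem the abstract refers to.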
