Paper Title

Generative Domain Adaptation for Face Anti-Spoofing

Authors

Qianyu Zhou, Ke-Yue Zhang, Taiping Yao, Ran Yi, Kekai Sheng, Shouhong Ding, Lizhuang Ma

Abstract

Face anti-spoofing (FAS) approaches based on unsupervised domain adaptation (UDA) have drawn growing attention due to their promising performance in target scenarios. Most existing UDA FAS methods fit the trained models to the target domain by aligning the distributions of semantic high-level features. However, insufficient supervision of unlabeled target domains and neglect of low-level feature alignment degrade the performance of existing methods. To address these issues, we propose a novel perspective on UDA FAS that directly fits the target data to the models: it stylizes the target data to the source-domain style via image translation and feeds the stylized data into the well-trained source model for classification. The proposed Generative Domain Adaptation (GDA) framework combines two carefully designed consistency constraints: 1) inter-domain neural statistic consistency guides the generator in narrowing the inter-domain gap, and 2) dual-level semantic consistency ensures the semantic quality of the stylized images. In addition, we propose an intra-domain spectrum mixup to further expand the target data distribution, ensuring generalization and reducing the intra-domain gap. Extensive experiments and visualizations demonstrate the effectiveness of our method against state-of-the-art methods.
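The "intra-domain spectrum mixup" named in the abstract suggests mixing images in the frequency domain. A common recipe for this kind of augmentation is to interpolate the amplitude spectra of two images while keeping one image's phase (which carries the spatial layout). The sketch below illustrates that idea; the function name and the exact mixing rule are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def spectrum_mixup(x_a, x_b, lam=0.5):
    """Illustrative frequency-domain mixup (an assumption, not the
    paper's exact method): blend the amplitude spectra of x_a and
    x_b, keep x_a's phase, and invert back to image space.
    x_a, x_b: (H, W, C) float arrays; lam: mixing coefficient."""
    # Per-channel 2-D FFT over the spatial axes
    fft_a = np.fft.fft2(x_a, axes=(0, 1))
    fft_b = np.fft.fft2(x_b, axes=(0, 1))
    amp_a, pha_a = np.abs(fft_a), np.angle(fft_a)
    amp_b = np.abs(fft_b)
    # Interpolate amplitudes; phase of x_a preserves its content layout
    amp_mix = lam * amp_a + (1.0 - lam) * amp_b
    mixed = amp_mix * np.exp(1j * pha_a)
    # Imaginary residue is numerical noise; keep the real part
    return np.real(np.fft.ifft2(mixed, axes=(0, 1)))
```

With `lam=1.0` the function returns the original image, so the mixing strength can be annealed during training.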
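"Inter-domain neural statistic consistency" plausibly penalizes the gap between the batch statistics of stylized images (as seen by the frozen source model) and the statistics that the model stored during source training, e.g. in its BatchNorm layers. The numpy sketch below shows one such penalty; the loss form and names are illustrative guesses, not the paper's definition.

```python
import numpy as np

def statistic_consistency_loss(feat, running_mean, running_var):
    """Illustrative guess at a neural-statistic consistency loss:
    L2 distance between the batch statistics of a feature map from
    stylized images and the source model's stored running statistics.
    feat: (N, C, H, W) features; running_mean, running_var: (C,)."""
    # Batch statistics over batch and spatial dimensions, per channel
    mean = feat.mean(axis=(0, 2, 3))
    var = feat.var(axis=(0, 2, 3))
    # Penalize deviation from the source-domain statistics
    return ((mean - running_mean) ** 2).mean() + \
           ((var - running_var) ** 2).mean()
```

In practice such a loss would be summed over the normalization layers of the frozen source model and back-propagated only into the generator, guiding stylized images toward source-domain feature statistics.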
