Title

FusiformNet: Extracting Discriminative Facial Features on Different Levels

Authors

Takano, Kyo

Abstract

Over the last several years, research on facial recognition based on Deep Neural Networks has evolved through approaches such as task-specific loss functions, image normalization and augmentation, and network architectures. However, few approaches have paid attention to how human faces differ from person to person. Premising that inter-personal differences are found both generally and locally on the human face, I propose FusiformNet, a novel framework for feature extraction that leverages the nature of discriminative facial features. Tested on the Image-Unrestricted setting of the Labeled Faces in the Wild benchmark, this method achieved a state-of-the-art accuracy of 96.67% without labeled outside data, image augmentation, normalization, or special loss functions. Likewise, the method performed on a par with previous state-of-the-art methods when pre-trained on the CASIA-WebFace dataset. Considering its ability to extract both general and local facial features, the utility of FusiformNet may not be limited to facial recognition but may also extend to other DNN-based tasks.
