Paper Title
FaceHop: A Light-Weight Low-Resolution Face Gender Classification Method
Paper Authors
Paper Abstract
A light-weight low-resolution face gender classification method, called FaceHop, is proposed in this research. We have witnessed rapid progress in face gender classification accuracy due to the adoption of deep learning (DL) technology. Yet, DL-based systems are not suitable for resource-constrained environments with limited networking and computing. FaceHop offers an interpretable non-parametric machine learning solution. It has desirable characteristics such as a small model size, a small training data amount, low training complexity, and low-resolution input images. FaceHop is developed with the successive subspace learning (SSL) principle and built upon the foundation of PixelHop++. The effectiveness of the FaceHop method is demonstrated by experiments. For gray-scale face images of resolution $32 \times 32$ in the LFW and the CMU Multi-PIE datasets, FaceHop achieves correct gender classification rates of 94.63% and 95.12% with model sizes of 16.9K and 17.6K parameters, respectively. It outperforms LeNet-5 in classification accuracy, while LeNet-5 has a larger model size of 75.8K parameters.