Paper Title
AniWho : A Quick and Accurate Way to Classify Anime Character Faces in Images
Paper Authors
Paper Abstract
In order to classify Japanese animation-style character faces, this paper investigates several models currently available, including InceptionV3, InceptionResNetV2, MobileNetV2, and EfficientNet, employing transfer learning. This paper demonstrates that EfficientNet-B7 achieves the highest accuracy rate, with a top-1 accuracy of 85.08%. MobileNetV2, while less accurate at a top-1 accuracy of 81.92%, benefits from a significantly faster inference time and fewer required parameters; however, the experiments show that MobileNetV2 is prone to overfitting. EfficientNet-B0 fixes the overfitting issue at the cost of a slightly slower inference time than MobileNetV2, while producing a slightly more accurate result, with a top-1 accuracy of 83.46%. This paper also uses a few-shot learning architecture called Prototypical Networks, which offers an adequate substitute for conventional transfer learning techniques.
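The core idea behind the Prototypical Networks mentioned above can be sketched in a few lines: each class is represented by the mean ("prototype") of its support-set embeddings, and a query image is assigned to the nearest prototype by Euclidean distance. The sketch below is a minimal, hedged illustration with NumPy and made-up function names; it assumes embeddings have already been produced by some backbone (e.g. one of the transfer-learned CNNs), which is not shown here.

```python
import numpy as np

def class_prototypes(support_emb, support_labels, n_classes):
    # Prototype of each class = mean of that class's support embeddings.
    # support_emb: (n_support, dim); support_labels: (n_support,) ints in [0, n_classes)
    return np.stack([
        support_emb[support_labels == c].mean(axis=0)
        for c in range(n_classes)
    ])

def classify_queries(query_emb, prototypes):
    # Squared Euclidean distance from every query to every prototype,
    # then pick the nearest prototype's class index.
    dists = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)
```

For example, with two well-separated classes in 2-D, a query embedding near a class's support points is assigned to that class; in the actual few-shot setting the same nearest-prototype rule is applied to embeddings of unseen character classes.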