Paper Title

CAMRI Loss: Improving Recall of a Specific Class without Sacrificing Accuracy

Paper Authors

Daiki Nishiyama, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma

Paper Abstract

In real-world applications of multi-class classification models, misclassification in an important class (e.g., stop sign) can be significantly more harmful than in other classes (e.g., speed limit). In this paper, we propose a loss function that can improve the recall of an important class while maintaining the same level of accuracy as cross-entropy loss. For our purpose, we need to separate the important class better than the other classes. However, existing methods that apply a class-sensitive penalty to the cross-entropy loss do not improve this separation. On the other hand, methods that add a margin to the angle between each feature vector and the corresponding weight vector of the last fully connected layer can improve it. Therefore, we propose a loss function, called Class-sensitive Additive Angular Margin Loss (CAMRI loss), that improves the separation of the important class by setting the margin only for that class. By penalizing this angle, CAMRI loss creates a margin around the important class in the feature space and is thereby expected to reduce the variance of the angles between the features and the weight vector of the important class relative to the other classes. In addition, concentrating the penalty only on the important class hardly sacrifices the separation of the other classes. Experiments on CIFAR-10, GTSRB, and AwA2 showed that the proposed method improved recall by up to 9% over cross-entropy loss without sacrificing accuracy.
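The abstract does not include an implementation, but the mechanism it describes can be illustrated with a minimal PyTorch sketch, assuming an ArcFace-style formulation in which logits are scaled cosines of the angles between L2-normalized features and class weight vectors, with the additive angular margin applied only when the ground-truth label is the important class. The class name `CAMRILoss`, the default margin of 0.2, and the scale factor of 30 below are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CAMRILoss(nn.Module):
    """Sketch of a class-sensitive additive angular margin loss.

    An additive angular margin is applied to the target angle only
    for samples whose ground-truth label is the important class.
    """

    def __init__(self, in_features, num_classes, important_class,
                 margin=0.2, scale=30.0):
        super().__init__()
        # Plays the role of the last fully connected layer's weight matrix.
        self.weight = nn.Parameter(torch.empty(num_classes, in_features))
        nn.init.xavier_uniform_(self.weight)
        self.important_class = important_class
        self.margin = margin
        self.scale = scale

    def forward(self, features, labels):
        # Cosine of the angle between each feature and each class weight.
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))

        # Add the margin to the target-class angle, but only for samples
        # labeled with the important class.
        target = F.one_hot(labels, num_classes=cosine.size(1)).bool()
        important = (labels == self.important_class).unsqueeze(1)
        theta = torch.where(target & important, theta + self.margin, theta)

        # Rescaled cosine logits go through ordinary cross-entropy.
        return F.cross_entropy(self.scale * torch.cos(theta), labels)


# Hypothetical usage with penultimate-layer features from a backbone:
criterion = CAMRILoss(in_features=512, num_classes=10, important_class=3)
features = torch.randn(32, 512)        # batch of embeddings
labels = torch.randint(0, 10, (32,))   # ground-truth classes
loss = criterion(features, labels)
loss.backward()
```

The key difference from a plain additive angular margin loss such as ArcFace is the `target & important` mask: ArcFace adds the margin at every sample's target angle, whereas here the penalty is concentrated on the important class, which is what lets the other classes' separation remain essentially unaffected.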
