Paper Title

Norm-Scaling for Out-of-Distribution Detection

Paper Authors

Deepak Ravikumar, Kaushik Roy

Paper Abstract

Out-of-Distribution (OoD) inputs are examples that do not belong to the true underlying distribution of the dataset. Research has shown that deep neural nets make confident mispredictions on OoD inputs. Therefore, it is critical to identify OoD inputs for safe and reliable deployment of deep neural nets. Often a threshold is applied to a similarity score to detect OoD inputs. One such similarity is angular similarity, which is the dot product of the latent representation with the mean class representation. Angular similarity encodes uncertainty: for example, if the angular similarity is low, it is less certain that the input belongs to that class. However, we observe that different classes have different distributions of angular similarity. Therefore, applying a single threshold for all classes is not ideal, since the same similarity score represents different uncertainties for different classes. In this paper, we propose norm-scaling, which normalizes the logits separately for each class. This ensures that a single value consistently represents similar uncertainty across classes. We show that norm-scaling, when used with the maximum softmax probability detector, achieves a 9.78% improvement in AUROC, a 5.99% improvement in AUPR, and a 33.19% reduction in the FPR95 metric over previous state-of-the-art methods.
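The abstract describes a simple pipeline: compute per-class angular-similarity logits, rescale each class's logit by a class-specific normalizer, then score with the maximum softmax probability (MSP). Below is a minimal NumPy sketch of that idea; the variable names, the choice of per-class scale (the norm of each class mean), and the 0.5 threshold are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def msp_ood_score(z, class_means, class_scales):
    """MSP confidence for a single latent vector `z`.

    z            : (d,) latent representation of the input
    class_means  : (C, d) mean latent representation per class
    class_scales : (C,) per-class normalizers (assumed estimated
                   from training data for this sketch)
    """
    # Angular similarity: dot product of the latent representation
    # with each class's mean representation.
    logits = class_means @ z                    # shape (C,)

    # Norm-scaling: normalize each class's logit by its own scale so a
    # single threshold carries the same meaning across classes.
    scaled = logits / class_scales              # shape (C,)

    # Maximum softmax probability over the scaled logits; a low value
    # suggests the input is out-of-distribution.
    exp = np.exp(scaled - scaled.max())         # numerically stable softmax
    probs = exp / exp.sum()
    return probs.max()

# Usage with synthetic data (10 classes, 64-dim latents).
rng = np.random.default_rng(0)
class_means = rng.normal(size=(10, 64))
class_scales = np.linalg.norm(class_means, axis=1)  # illustrative choice
z = rng.normal(size=64)
is_ood = msp_ood_score(z, class_means, class_scales) < 0.5
print("flagged as OoD:", is_ood)
```

In practice, the per-class scales and the detection threshold would be tuned on in-distribution validation data rather than fixed as above.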
