Paper Title

Improving Classifier Confidence using Lossy Label-Invariant Transformations

Authors

Sooyong Jang, Insup Lee, James Weimer

Abstract


Providing reliable model uncertainty estimates is imperative to enabling robust decision making by autonomous agents and humans alike. While recently there have been significant advances in confidence calibration for trained models, examples with poor calibration persist in most calibrated models. Consequently, multiple techniques have been proposed that leverage label-invariant transformations of the input (i.e., an input manifold) to improve worst-case confidence calibration. However, manifold-based confidence calibration techniques generally do not scale and/or require expensive retraining when applied to models with large input spaces (e.g., ImageNet). In this paper, we present the recursive lossy label-invariant calibration (ReCal) technique that leverages label-invariant transformations of the input that induce a loss of discriminatory information to recursively group (and calibrate) inputs - without requiring model retraining. We show that ReCal outperforms other calibration methods on multiple datasets, especially, on large-scale datasets such as ImageNet.
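The core idea in the abstract can be illustrated with a small sketch: apply a lossy, label-invariant transformation to each input (here, 2x2 average-pooling downscaling of an image, an illustrative choice, not necessarily the paper's transform) and group inputs by whether the classifier's top-1 prediction survives the transformation; each group could then be calibrated separately. All function names and the toy random "classifier" outputs below are assumptions for demonstration only, not the paper's actual ReCal algorithm.

```python
import numpy as np

def downscale(x, factor=2):
    """Lossy label-invariant transform: average pooling over factor x factor blocks.

    Discards high-frequency detail while (ideally) preserving the image's label.
    """
    h, w = x.shape
    h2, w2 = h // factor, w // factor
    return x[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def group_by_transform(probs_orig, probs_lossy):
    """Group inputs by whether the top-1 label is preserved under the lossy transform.

    Returns a boolean mask: True -> 'label preserved' group, False -> 'label changed'.
    A recursive scheme would apply further lossy transforms within each group.
    """
    return probs_orig.argmax(axis=1) == probs_lossy.argmax(axis=1)

# Toy demonstration with simulated softmax outputs from a fake 3-class classifier.
rng = np.random.default_rng(0)
probs_orig = rng.dirichlet(np.ones(3), size=8)   # classifier on original inputs
probs_lossy = rng.dirichlet(np.ones(3), size=8)  # classifier on transformed inputs
groups = group_by_transform(probs_orig, probs_lossy)
```

Each group could then receive its own post-hoc calibration map (e.g., temperature scaling), which is the sense in which the transformation both groups and calibrates inputs without retraining the model.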
