Paper Title

Robustness of Accuracy Metric and its Inspirations in Learning with Noisy Labels

Authors

Pengfei Chen, Junjie Ye, Guangyong Chen, Jingwei Zhao, Pheng-Ann Heng

Abstract

For multi-class classification under class-conditional label noise, we prove that the accuracy metric itself can be robust. We concretize this finding's inspiration in two essential aspects: training and validation, with which we address critical issues in learning with noisy labels. For training, we show that maximizing training accuracy on sufficiently many noisy samples yields an approximately optimal classifier. For validation, we prove that a noisy validation set is reliable, addressing the critical demand of model selection in scenarios like hyperparameter-tuning and early stopping. Previously, model selection using noisy validation samples has not been theoretically justified. We verify our theoretical results and additional claims with extensive experiments. We show characterizations of models trained with noisy labels, motivated by our theoretical results, and verify the utility of a noisy validation set by showing the impressive performance of a framework termed noisy best teacher and student (NTS). Our code is released.
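
To make the validation claim concrete, below is a minimal sketch (not the authors' released code and not the NTS framework) of model selection driven by accuracy on a noisy validation set: checkpoints are scored against possibly corrupted labels, and the best-scoring one is kept, e.g. for early stopping. All names (`select_best_epoch`, the toy data) are hypothetical illustrations.

```python
# Minimal sketch: pick a checkpoint by accuracy on a *noisy* validation set,
# motivated by the abstract's claim that the accuracy metric stays reliable
# under class-conditional label noise. Toy data only; not the paper's method.
import numpy as np

def accuracy(predictions, labels):
    """Fraction of samples whose predicted class matches the (possibly noisy) label."""
    return float(np.mean(predictions == labels))

def select_best_epoch(per_epoch_predictions, noisy_val_labels):
    """Return the epoch index (and score) with the highest noisy-validation accuracy."""
    scores = [accuracy(p, noisy_val_labels) for p in per_epoch_predictions]
    best = int(np.argmax(scores))  # early stopping: keep the best-scoring checkpoint
    return best, scores[best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy_val_labels = rng.integers(0, 10, size=1000)  # toy 10-class noisy labels
    # Toy "checkpoints": each epoch's predictions agree with the noisy labels
    # with a different probability q.
    per_epoch_predictions = [
        np.where(rng.random(1000) < q, noisy_val_labels, rng.integers(0, 10, size=1000))
        for q in (0.3, 0.6, 0.8, 0.7)
    ]
    epoch, score = select_best_epoch(per_epoch_predictions, noisy_val_labels)
    print(f"selected epoch {epoch} with noisy validation accuracy {score:.3f}")
```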
