Paper Title
A Gift from Label Smoothing: Robust Training with Adaptive Label Smoothing via Auxiliary Classifier under Label Noise
Paper Authors
Paper Abstract
As deep neural networks can easily overfit noisy labels, robust training in the presence of noisy labels is becoming an important challenge in modern deep learning. While existing methods address this problem in various directions, they still produce unpredictable sub-optimal results since they rely on the posterior information estimated by the feature extractor corrupted by noisy labels. Lipschitz regularization successfully alleviates this problem by training a robust feature extractor, but it requires longer training time and expensive computations. Motivated by this, we propose a simple yet effective method, called ALASCA, which efficiently provides a robust feature extractor under label noise. ALASCA integrates two key ingredients: (1) adaptive label smoothing based on our theoretical analysis that label smoothing implicitly induces Lipschitz regularization, and (2) auxiliary classifiers that enable practical application of intermediate Lipschitz regularization with negligible computations. We conduct wide-ranging experiments for ALASCA and combine our proposed method with previous noise-robust methods on several synthetic and real-world datasets. Experimental results show that our framework consistently improves the robustness of feature extractors and the performance of existing baselines with efficiency. Our code is available at https://github.com/jongwooko/ALASCA.
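To make the first ingredient concrete, the sketch below shows conventional (uniform) label smoothing, the building block that the abstract says implicitly induces Lipschitz regularization. This is a minimal illustration, not the paper's adaptive ALASCA variant; the function name and the choice of `eps` are illustrative.

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Turn integer class labels into smoothed one-hot targets:
    the true class receives 1 - eps, and the remaining mass eps
    is spread uniformly over all classes."""
    one_hot = np.eye(num_classes)[labels]          # hard one-hot targets
    return one_hot * (1.0 - eps) + eps / num_classes

# Example: two samples with labels 2 and 0 over 4 classes.
targets = smooth_labels(np.array([2, 0]), num_classes=4, eps=0.1)
# Each row sums to 1; the true class gets 0.925, others 0.025.
```

ALASCA's contribution, per the abstract, is to make `eps` adaptive per sample and to apply the resulting regularization at intermediate layers via auxiliary classifiers, rather than using a single fixed smoothing factor as above.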