Paper Title

Multi-Objective Interpolation Training for Robustness to Label Noise

Paper Authors

Diego Ortego, Eric Arazo, Paul Albert, Noel E. O'Connor, Kevin McGuinness

Paper Abstract

Deep neural networks trained with standard cross-entropy loss memorize noisy labels, which degrades their performance. Most research to mitigate this memorization proposes new robust classification loss functions. Conversely, we propose a Multi-Objective Interpolation Training (MOIT) approach that jointly exploits contrastive learning and classification to mutually help each other and boost performance against label noise. We show that standard supervised contrastive learning degrades in the presence of label noise and propose an interpolation training strategy to mitigate this behavior. We further propose a novel label noise detection method that exploits the robust feature representations learned via contrastive learning to estimate per-sample soft-labels whose disagreements with the original labels accurately identify noisy samples. This detection allows treating noisy samples as unlabeled and training a classifier in a semi-supervised manner to prevent noise memorization and improve representation learning. We further propose MOIT+, a refinement of MOIT by fine-tuning on detected clean samples. Hyperparameter and ablation studies verify the key components of our method. Experiments on synthetic and real-world noise benchmarks demonstrate that MOIT/MOIT+ achieves state-of-the-art results. Code is available at https://git.io/JI40X.
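
Two mechanisms named in the abstract lend themselves to a brief illustration: mixup-style input interpolation during contrastive training, and label noise detection that compares each sample's given label against a soft-label estimated from the learned feature space. The sketch below is a minimal NumPy rendering of those ideas, not the paper's implementation; the function names, the choice of k, and the cosine-similarity k-NN vote are illustrative assumptions.

```python
import numpy as np

def mixup(x1, x2, alpha=1.0):
    """Mixup-style interpolation of two inputs. The Beta(alpha, alpha)
    coefficient is the standard mixup recipe and an assumption about
    MOIT's exact interpolation scheme."""
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam

def detect_noisy_samples(features, labels, num_classes, k=100):
    """Flag likely-noisy samples by voting a per-sample soft-label among
    the k nearest neighbours in the contrastive feature space, then
    checking disagreement with the given label.

    features: (N, D) L2-normalised embeddings from the encoder.
    labels:   (N,)  integer array of (possibly noisy) labels.
    Returns:  (noisy_mask, soft_labels).
    """
    sims = features @ features.T              # cosine similarity (normalised inputs)
    np.fill_diagonal(sims, -np.inf)           # a sample never votes for itself
    knn = np.argsort(-sims, axis=1)[:, :k]    # indices of k nearest neighbours

    # Soft-label: normalised histogram of the neighbours' labels.
    soft = np.zeros((len(labels), num_classes))
    for i, neigh in enumerate(knn):
        soft[i] = np.bincount(labels[neigh], minlength=num_classes) / k

    # Disagreement between the soft-label's mode and the given label
    # marks the sample as noisy.
    return soft.argmax(axis=1) != labels, soft
```

In the training scheme the abstract describes, samples flagged by such a mask would be treated as unlabeled for the semi-supervised classification stage, while the retained clean samples drive fine-tuning in MOIT+.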
