Paper Title


Tackling Online One-Class Incremental Learning by Removing Negative Contrasts

Paper Authors

Nader Asadi, Sudhir Mudur, Eugene Belilovsky

Abstract


Recent work studies the supervised online continual learning setting, where a learner receives a stream of data whose class distribution changes over time. Distinct from other continual learning settings, the learner is presented new samples only once and must distinguish between all seen classes. A number of successful methods in this setting focus on storing and replaying a subset of samples alongside incoming data in a computationally efficient manner. One recent proposal, ER-AML, achieved strong performance in this setting by applying an asymmetric loss based on contrastive learning to the incoming and replayed data. However, a key ingredient of the proposed method is avoiding contrasts between incoming data and stored data, which makes it impractical for the setting where only one new class is introduced in each phase of the stream. In this work we adapt a recently proposed approach (BYOL) from self-supervised learning to the supervised learning setting, unlocking the constraint on contrasts. We then show that supplementing this with additional regularization on class prototypes yields a new method that achieves strong performance in the one-class incremental learning setting and is competitive with the top-performing methods in the multi-class incremental setting.
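The abstract's key idea is replacing the contrastive objective (which needs negative pairs, unavailable when a phase contains only one class) with a BYOL-style negative-free similarity loss. The paper does not give its exact loss in this abstract, so the following is only a minimal sketch, under our own assumptions, of what such a negative-free objective looks like: cosine similarity between an online prediction and a (stop-gradient) target projection, with no negatives contrasted.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Normalize each row embedding to unit length.
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def byol_style_loss(online_pred, target_proj):
    """Negative-free similarity loss in the style of BYOL:
    minimize 2 - 2 * cos(p, z) between the online network's
    prediction p and the target projection z (treated as a
    stop-gradient target). No negative pairs are used, which
    is why such a loss remains usable in one-class phases."""
    p = l2_normalize(online_pred)
    z = l2_normalize(target_proj)
    return float(np.mean(2.0 - 2.0 * np.sum(p * z, axis=1)))

# Hypothetical toy batch of two 4-d embeddings.
rng = np.random.default_rng(0)
p = rng.normal(size=(2, 4))
loss_aligned = byol_style_loss(p, p)    # identical views: loss 0
loss_opposed = byol_style_loss(p, -p)   # opposite views: loss 4
```

In BYOL proper, `target_proj` comes from a slowly updated momentum copy of the network and receives no gradient; the loss ranges from 0 (perfectly aligned) to 4 (opposed), so it can be minimized without any negative contrasts.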
