Paper Title

Exemplar-free Online Continual Learning

Authors

He, Jiangpeng, Zhu, Fengqing

Abstract

Targeting real-world scenarios, online continual learning aims to learn new tasks from sequentially available data under the condition that each data point is observed only once by the learner. Although recent works have made remarkable achievements by storing part of the learned task data as exemplars for knowledge replay, performance relies heavily on the size of the stored exemplar set, while storage consumption is a significant constraint in continual learning. In addition, storing exemplars may not always be feasible for certain applications due to privacy concerns. In this work, we propose a novel exemplar-free method leveraging a nearest-class-mean (NCM) classifier, where each class mean is estimated during the training phase over all data seen so far through an online mean update criterion. We focus on the image classification task and conduct extensive experiments on benchmark datasets including CIFAR-100 and Food-1k. The results demonstrate that our method, without using any exemplars, outperforms state-of-the-art exemplar-based approaches by large margins under the standard protocol (20 exemplars per class) and achieves competitive performance even against a larger exemplar budget (100 exemplars per class).
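The core mechanism the abstract describes, an NCM classifier whose per-class means are maintained incrementally as the data stream is observed once, can be sketched as below. This is a minimal illustration under assumptions, not the authors' implementation: the `OnlineNCM` class name is hypothetical, features are plain NumPy vectors (in the paper they would come from a learned backbone), and Euclidean distance is assumed as the nearest-mean metric.

```python
import numpy as np

class OnlineNCM:
    """Nearest-class-mean classifier with online (running) mean updates.

    Each class mean is updated incrementally as samples stream in,
    so no exemplars need to be stored for replay.
    """

    def __init__(self):
        self.means = {}   # class label -> running mean of feature vectors
        self.counts = {}  # class label -> number of samples seen so far

    def update(self, feature, label):
        # Online mean update: mu <- mu + (x - mu) / n
        if label not in self.means:
            self.means[label] = np.zeros_like(feature, dtype=float)
            self.counts[label] = 0
        self.counts[label] += 1
        self.means[label] += (feature - self.means[label]) / self.counts[label]

    def predict(self, feature):
        # Assign the class whose running mean is nearest in Euclidean distance.
        labels = list(self.means)
        dists = [np.linalg.norm(feature - self.means[c]) for c in labels]
        return labels[int(np.argmin(dists))]
```

Because the running mean is mathematically identical to the mean over all samples seen so far, the classifier's decision boundary reflects the full stream even though each sample is processed exactly once and then discarded.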
