Paper Title

Graph-Based Continual Learning

Paper Authors

Binh Tang, David S. Matteson

Abstract

Despite significant advances, continual learning models still suffer from catastrophic forgetting when exposed to incrementally available data from non-stationary distributions. Rehearsal approaches alleviate the problem by maintaining and replaying a small episodic memory of previous samples, often implemented as an array of independent memory slots. In this work, we propose to augment such an array with a learnable random graph that captures pairwise similarities between its samples, and use it not only to learn new tasks but also to guard against forgetting. Empirical results on several benchmark datasets show that our model consistently outperforms recently proposed baselines for task-free continual learning.
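
The abstract only outlines the mechanism, so below is a minimal, hypothetical PyTorch sketch of the general idea: an episodic memory whose slots are connected by a learnable pairwise-similarity graph that is trained jointly with the model and used to regularize replay. The class and member names (GraphEpisodicMemory, edge_logits, replay_loss), the ring-buffer insertion rule, and the graph-smoothness penalty are all illustrative assumptions, not the authors' actual objective or code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphEpisodicMemory(nn.Module):
    """Episodic memory augmented with a learnable random graph over its slots.

    Hypothetical sketch: samples are stored as in ordinary rehearsal, while
    `edge_logits` parameterizes edge probabilities (pairwise similarities)
    between stored samples, learned jointly with the main model.
    """

    def __init__(self, num_slots: int, input_dim: int):
        super().__init__()
        self.register_buffer("inputs", torch.zeros(num_slots, input_dim))
        self.register_buffer("labels", torch.zeros(num_slots, dtype=torch.long))
        self.ptr = 0  # number of samples seen so far
        # Logits of the learnable random graph; sigmoid gives edge probabilities.
        self.edge_logits = nn.Parameter(torch.zeros(num_slots, num_slots))

    @property
    def num_filled(self) -> int:
        return min(self.ptr, self.inputs.size(0))

    def add(self, x: torch.Tensor, y: torch.Tensor) -> None:
        # Simplified ring-buffer insertion; the paper's memory-update rule may differ.
        for xi, yi in zip(x, y):
            slot = self.ptr % self.inputs.size(0)
            self.inputs[slot] = xi
            self.labels[slot] = yi
            self.ptr += 1

    def replay_loss(self, model: nn.Module) -> torch.Tensor:
        """Cross-entropy on replayed samples plus a graph-weighted smoothness term
        that penalizes divergent predictions on samples the graph links together."""
        n = self.num_filled
        x, y = self.inputs[:n], self.labels[:n]
        logits = model(x)
        ce = F.cross_entropy(logits, y)
        adj = torch.sigmoid(self.edge_logits[:n, :n])    # edge probabilities
        probs = F.softmax(logits, dim=-1)
        pairwise = torch.cdist(probs, probs, p=2) ** 2   # prediction disagreement
        smoothness = (adj * pairwise).sum() / (adj.sum() + 1e-8)
        return ce + smoothness


# Usage sketch: interleave the current-task loss with the graph-regularized replay loss.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
memory = GraphEpisodicMemory(num_slots=200, input_dim=784)
optimizer = torch.optim.SGD(list(model.parameters()) + list(memory.parameters()), lr=0.1)

x_new, y_new = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = F.cross_entropy(model(x_new), y_new)
if memory.num_filled > 1:
    loss = loss + memory.replay_loss(model)
loss.backward()
optimizer.step()
memory.add(x_new, y_new)
```

The point the sketch tries to capture is that the graph parameters are optimized alongside the network, so the memory learns which stored samples should behave consistently under the current model, rather than treating its slots as independent entries.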
