Paper Title


Self-Supervised Learning Aided Class-Incremental Lifelong Learning

Authors

Song Zhang, Gehui Shen, Jinsong Huang, Zhi-Hong Deng

Abstract


Lifelong or continual learning remains a challenge for artificial neural networks, as they are required to be both stable, to preserve old knowledge, and plastic, to acquire new knowledge. It is common to see previous experience get overwritten, which leads to the well-known issue of catastrophic forgetting, especially in the scenario of class-incremental learning (Class-IL). Recently, many lifelong learning methods have been proposed to avoid catastrophic forgetting. However, models that learn without replaying the input data encounter another, largely ignored problem, which we refer to as prior information loss (PIL). During Class-IL training, since the model has no knowledge of subsequent tasks, it extracts only the features necessary for the tasks learned so far, whose information is insufficient for joint classification. In this paper, our empirical results on several image datasets show that PIL limits the performance of the current state-of-the-art method for Class-IL, the orthogonal weights modification (OWM) algorithm. Furthermore, we propose to combine self-supervised learning, which can provide effective representations without requiring labels, with Class-IL to partly get around this problem. Experiments show the superiority of the proposed method over OWM, as well as over other strong baselines.
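To make the OWM baseline mentioned above concrete, here is a minimal NumPy sketch of the core idea behind orthogonal weights modification: gradients for a new task are projected onto the subspace orthogonal to the inputs of previous tasks, so that old input-output mappings are barely disturbed. The class name, the `alpha` regularizer, and the recursive RLS-style projector update are assumptions following the common formulation of OWM, not the paper's exact implementation.

```python
import numpy as np

class OWMProjector:
    """Sketch of an OWM-style orthogonal projector for one weight vector.

    Maintains P ~ projector onto the orthogonal complement of the span
    of previously seen inputs, updated recursively (RLS-like form).
    """

    def __init__(self, dim, alpha=1e-3):
        # Identity at the start: no constraints before any input is seen.
        self.P = np.eye(dim)
        self.alpha = alpha  # small regularizer; assumed hyperparameter

    def update(self, x):
        # Shrink P so it (approximately) annihilates the new input x.
        x = x.reshape(-1, 1)
        Px = self.P @ x
        self.P -= (Px @ Px.T) / (self.alpha + (x.T @ Px).item())

    def project(self, grad):
        # Project a gradient so applying it barely affects old inputs.
        return self.P @ grad

rng = np.random.default_rng(0)
proj = OWMProjector(dim=8)
old_inputs = rng.normal(size=(5, 8))   # inputs from "previous tasks"
for x in old_inputs:
    proj.update(x)

g = rng.normal(size=(8, 1))            # raw gradient for a new task
g_owm = proj.project(g)                # OWM-projected gradient
# old_inputs @ g_owm has much smaller magnitude than old_inputs @ g,
# i.e. the projected update interferes far less with old inputs.
```

The point of the sketch is the projection step: the raw gradient would change the network's responses to old inputs, while the projected one leaves them nearly intact; this is the stability mechanism whose representations, the paper argues, are nevertheless limited by PIL.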
