Paper Title
Scalable Adversarial Online Continual Learning
Paper Authors
Paper Abstract
Adversarial continual learning (ACL) is effective for continual learning problems because its feature alignment process generates task-invariant features that are less susceptible to catastrophic forgetting. Nevertheless, the ACL method imposes considerable complexity because it relies on task-specific networks and discriminators. It also requires an iterative training process, which is unsuitable for online (one-epoch) continual learning problems. This paper proposes a scalable adversarial continual learning (SCALE) method that puts forward a parameter generator transforming common features into task-specific features, together with a single discriminator in an adversarial game to induce common features. The training process is carried out in a meta-learning fashion using a new combination of three loss functions. SCALE outperforms prominent baselines by noticeable margins in both accuracy and execution time.
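The abstract describes three components: a shared backbone producing common (task-invariant) features, a parameter generator that maps a task representation to the weights of a task-specific transformation, and a single discriminator that tries to recover the task identity from the common features. The following is a minimal structural sketch of that forward pass; all shapes, the use of a task embedding, the `tanh` nonlinearities, and the layer sizes are illustrative assumptions, not the paper's actual architecture or loss functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, W, b):
    # Plain affine map used for every layer in this sketch.
    return x @ W + b

# Shared backbone: maps raw inputs (dim 16) to common features (dim 8).
W_bb = rng.normal(size=(16, 8)); b_bb = np.zeros(8)

# Parameter generator: from a task embedding (dim 4, an assumption),
# produce the 8x8 weight matrix of a task-specific transformation.
W_gen = rng.normal(size=(4, 8 * 8)); b_gen = np.zeros(8 * 8)

# Single discriminator: predicts the task identity (3 tasks here) from
# the common features; the backbone would be trained adversarially
# against it to make the common features task-invariant.
W_disc = rng.normal(size=(8, 3)); b_disc = np.zeros(3)

x = rng.normal(size=(5, 16))        # a batch of 5 inputs
task_emb = rng.normal(size=(4,))    # embedding of the current task (assumed)

common = np.tanh(linear(x, W_bb, b_bb))                  # common features
W_task = linear(task_emb[None, :], W_gen, b_gen).reshape(8, 8)
task_specific = np.tanh(common @ W_task)                 # task-specific features

task_logits = linear(common, W_disc, b_disc)             # discriminator output

print(common.shape, task_specific.shape, task_logits.shape)
```

In training, the classifier head (omitted here) would consume `task_specific`, while the adversarial game between the backbone and the discriminator operates on `common`; the abstract's combination of three loss functions is not specified here, so this sketch stops at the forward pass.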