Paper Title


Task-Adaptive Saliency Guidance for Exemplar-free Class Incremental Learning

Authors

Liu, Xialei, Zhai, Jiang-Tian, Bagdanov, Andrew D., Li, Ke, Cheng, Ming-Ming

Abstract


Exemplar-free Class Incremental Learning (EFCIL) aims to sequentially learn tasks with access only to data from the current one. EFCIL is of interest because it mitigates concerns about privacy and long-term storage of data, while at the same time alleviating the problem of catastrophic forgetting in incremental learning. In this work, we introduce task-adaptive saliency for EFCIL and propose a new framework, which we call Task-Adaptive Saliency Supervision (TASS), for mitigating the negative effects of saliency drift between different tasks. We first apply boundary-guided saliency to maintain task adaptivity and \textit{plasticity} of model attention. In addition, we introduce task-agnostic low-level signals as auxiliary supervision to increase the \textit{stability} of model attention. Finally, we introduce a module for injecting and recovering saliency noise to increase the robustness of saliency preservation. Our experiments demonstrate that our method better preserves saliency maps across tasks and achieves state-of-the-art results on the CIFAR-100, Tiny-ImageNet, and ImageNet-Subset EFCIL benchmarks. Code is available at \url{https://github.com/scok30/tass}.
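To make the notion of "saliency drift" concrete, the sketch below shows a generic saliency-preservation penalty: a feature map is collapsed into a normalized 2D attention map, and an L1 distance between the old and new models' maps penalizes drift. This is an illustrative baseline under our own assumptions, not the TASS objective from the paper; the names `attention_map` and `saliency_drift_loss` are hypothetical.

```python
import numpy as np

def attention_map(features):
    """Collapse a C x H x W feature tensor into a 2D attention map:
    channel-wise mean of absolute activations, normalized to sum to 1."""
    amap = np.abs(features).mean(axis=0)   # H x W
    return amap / (amap.sum() + 1e-8)

def saliency_drift_loss(old_features, new_features):
    """L1 distance between the old and new models' attention maps.
    Minimizing this keeps the new model attending to the same regions
    (a generic saliency-distillation penalty, not the paper's TASS loss)."""
    return np.abs(attention_map(old_features) - attention_map(new_features)).sum()
```

In an incremental-learning loop, `old_features` would come from a frozen copy of the previous-task model and `new_features` from the model being trained, with this penalty added to the task loss; identical features give zero drift.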
