Paper Title

On Generalizing Beyond Domains in Cross-Domain Continual Learning

Paper Authors

Christian Simon, Masoud Faraki, Yi-Hsuan Tsai, Xiang Yu, Samuel Schulter, Yumin Suh, Mehrtash Harandi, Manmohan Chandraker

Paper Abstract

Humans have the ability to accumulate knowledge of new tasks in varying conditions, but deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task. Many recent methods focus on preventing catastrophic forgetting under the assumption of train and test data following similar distributions. In this work, we consider a more realistic scenario of continual learning under domain shifts where the model must generalize its inference to an unseen domain. To this end, we encourage learning semantically meaningful features by equipping the classifier with class similarity metrics as learning parameters which are obtained through Mahalanobis similarity computations. Learning of the backbone representation along with these extra parameters is done seamlessly in an end-to-end manner. In addition, we propose an approach based on the exponential moving average of the parameters for better knowledge distillation. We demonstrate that, to a great extent, existing continual learning algorithms fail to handle the forgetting issue under multiple distributions, while our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
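As a rough illustration of the two components mentioned in the abstract, the sketch below (PyTorch, not the authors' implementation) shows a classifier head whose logits are negative Mahalanobis distances to learnable class centers, and an exponential-moving-average update of model parameters that can serve as the distillation teacher. The names `MahalanobisClassifier`, `ema_update`, and the low-rank parameterization M_k = L_k L_kᵀ are assumptions made for this example only.

```python
# Minimal sketch, assuming a backbone that outputs D-dim features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MahalanobisClassifier(nn.Module):
    """Logits are negative squared Mahalanobis distances to learnable class centers."""
    def __init__(self, feat_dim, num_classes, rank=16):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        # One low-rank factor per class; M_k = L_k @ L_k.T is PSD by construction.
        self.factors = nn.Parameter(torch.randn(num_classes, feat_dim, rank) * 0.01)

    def forward(self, feats):                                 # feats: (B, D)
        diff = feats.unsqueeze(1) - self.centers              # (B, K, D)
        proj = torch.einsum('bkd,kdr->bkr', diff, self.factors)
        dist = (proj ** 2).sum(-1)                            # squared Mahalanobis distance
        return -dist                                          # logits

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    # Exponential moving average of parameters; the EMA copy (e.g. created with
    # copy.deepcopy(student) and frozen) acts as the knowledge-distillation teacher.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)

def distill_loss(student_logits, teacher_logits, T=2.0):
    # Standard temperature-scaled KL distillation between teacher and student outputs.
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean') * T * T
```

Both the classifier head and the backbone are trained end to end with the task loss, while `ema_update` is called after each optimizer step and `distill_loss` is added when learning a new task; the exact losses, decay, and temperature here are illustrative choices, not values from the paper.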
