Paper Title

Continual Few-shot Relation Learning via Embedding Space Regularization and Data Augmentation

Authors

Chengwei Qin, Shafiq Joty

Abstract

Existing continual relation learning (CRL) methods rely on plenty of labeled training data for learning a new task, which can be hard to acquire in real scenarios, as getting large and representative labeled data is often expensive and time-consuming. It is therefore necessary for the model to learn novel relational patterns with very few labeled examples while avoiding catastrophic forgetting of previous task knowledge. In this paper, we formulate this challenging yet practical problem as continual few-shot relation learning (CFRL). Based on the finding that learning for new emerging few-shot tasks often results in feature distributions that are incompatible with previous tasks' learned distributions, we propose a novel method based on embedding space regularization and data augmentation. Our method generalizes to new few-shot tasks and avoids catastrophic forgetting of previous tasks by enforcing extra constraints on the relational embeddings and by adding extra relevant data in a self-supervised manner. Through extensive experiments, we demonstrate that our method can significantly outperform previous state-of-the-art methods in CFRL task settings.
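To make the idea of embedding space regularization concrete, the sketch below shows one simple way such a constraint could look: a penalty that keeps the relation embeddings produced after training on a new few-shot task close to stored prototypes of previously learned relations. This is only an illustrative sketch, not the paper's actual formulation; the squared-L2 penalty, the `lam` weight, and the function name are assumptions introduced here for illustration.

```python
import numpy as np

def embedding_regularization_loss(new_embs, stored_protos, lam=0.1):
    """Illustrative embedding-space regularizer (hypothetical, not the
    paper's exact loss): penalize drift of current relation embeddings
    away from prototypes saved from previous tasks.

    new_embs:      (n, d) array of current relation embeddings
    stored_protos: (n, d) array of prototype embeddings from old tasks
    lam:           regularization weight (assumed hyperparameter)
    """
    # Mean squared L2 distance between each embedding and its prototype;
    # minimizing this discourages catastrophic forgetting of old relations.
    return lam * np.mean(np.sum((new_embs - stored_protos) ** 2, axis=1))
```

In training, a term like this would be added to the few-shot task loss, so the model trades off fitting the new relations against preserving the embedding geometry of the old ones.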
