Paper Title

Improving Meta-learning for Low-resource Text Classification and Generation via Memory Imitation

Authors

Yingxiu Zhao, Zhiliang Tian, Huaxiu Yao, Yinhe Zheng, Dongkyu Lee, Yiping Song, Jian Sun, Nevin L. Zhang

Abstract

Building models of natural language processing (NLP) is challenging in low-resource scenarios where only limited data are available. Optimization-based meta-learning algorithms achieve promising results in low-resource scenarios by adapting a well-generalized model initialization to handle new tasks. Nonetheless, these approaches suffer from the memorization overfitting issue, where the model tends to memorize the meta-training tasks while ignoring support sets when adapting to new tasks. To address this issue, we propose a memory imitation meta-learning (MemIML) method that enhances the model's reliance on support sets for task adaptation. Specifically, we introduce a task-specific memory module to store support set information and construct an imitation module to force query sets to imitate the behaviors of some representative support-set samples stored in the memory. A theoretical analysis is provided to prove the effectiveness of our method, and empirical results also demonstrate that our method outperforms competitive baselines on both text classification and generation tasks.

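Below is a minimal, hypothetical PyTorch sketch of the idea the abstract describes: support-set features are stored in a task-specific memory, and query features are regularized to imitate their most similar memory entries during a first-order, MAML-style adaptation step. This is not the authors' MemIML implementation; the module and parameter names (MemIMLSketch, imitation_weight, inner_lr) and the single-head inner loop are illustrative assumptions.

```python
# Hypothetical sketch only -- not the authors' MemIML implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemIMLSketch(nn.Module):
    """Encoder + classifier head; encoder outputs double as memory features."""

    def __init__(self, input_dim=10, hidden_dim=32, num_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        feats = self.encoder(x)
        return feats, self.classifier(feats)


def imitation_loss(query_feats, memory_feats):
    """Pull each query feature toward its most similar support-set memory entry."""
    sims = F.normalize(query_feats, dim=-1) @ F.normalize(memory_feats, dim=-1).T
    nearest = sims.argmax(dim=-1)  # closest memory slot per query example
    return F.mse_loss(query_feats, memory_feats[nearest])


def meta_train_step(model, meta_optimizer, tasks, inner_lr=0.01, imitation_weight=0.1):
    """One first-order, MAML-style outer step over a batch of tasks."""
    meta_optimizer.zero_grad()
    total_loss = 0.0
    for support_x, support_y, query_x, query_y in tasks:
        # Store support-set features as a task-specific memory.
        support_feats, _ = model(support_x)
        memory = support_feats.detach()

        # Inner loop: one gradient step on the classifier head using the support set.
        w, b = model.classifier.weight, model.classifier.bias
        inner_loss = F.cross_entropy(F.linear(support_feats, w, b), support_y)
        grad_w, grad_b = torch.autograd.grad(inner_loss, (w, b))
        w_adapted, b_adapted = w - inner_lr * grad_w, b - inner_lr * grad_b

        # Outer objective: query loss under the adapted head, plus the imitation
        # regularizer that ties query behavior to the support-set memory.
        query_feats, _ = model(query_x)
        task_loss = F.cross_entropy(F.linear(query_feats, w_adapted, b_adapted), query_y)
        task_loss = task_loss + imitation_weight * imitation_loss(query_feats, memory)
        total_loss = total_loss + task_loss

    (total_loss / len(tasks)).backward()
    meta_optimizer.step()
    return total_loss.item() / len(tasks)


# Toy usage: two 5-way tasks with random 10-dim feature vectors standing in for text.
model = MemIMLSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tasks = [(torch.randn(25, 10), torch.randint(0, 5, (25,)),
          torch.randn(15, 10), torch.randint(0, 5, (15,))) for _ in range(2)]
print(meta_train_step(model, opt, tasks))
```

The imitation term is what counters memorization overfitting in this sketch: the query loss can no longer be minimized by the prior alone, because query features are also pushed toward representatives drawn from the task's support set.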