Paper Title
SkillNet-NLU: A Sparsely Activated Model for General-Purpose Natural Language Understanding
Paper Authors
Paper Abstract
Prevailing deep models are single-purpose and overspecialize at individual tasks. However, when extended to new tasks, they typically forget previously learned skills and learn from scratch. We address this issue by introducing SkillNet-NLU, a general-purpose model that stitches together existing skills to learn new tasks more effectively. The key feature of our approach is that it is sparsely activated, guided by predefined skills. Unlike traditional dense models that always activate all model parameters, SkillNet-NLU only activates the parts of the model parameters whose skills are relevant to the target task. When learning a new task, our approach precisely activates the required skills and also provides an option to add new skills. We evaluate on natural language understanding tasks and report the following findings. First, with only one model checkpoint, SkillNet-NLU performs better than task-specific fine-tuning and two multi-task learning baselines (i.e., a dense model and a Mixture-of-Experts model) on six tasks. Second, sparsely activated pre-training further improves the overall performance. Third, SkillNet-NLU significantly outperforms the baseline systems when extended to new tasks.
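To make the skill-guided sparse activation described in the abstract concrete, here is a minimal sketch in PyTorch. It partitions a layer's parameters into skill-specific modules and, for a given task, runs only the modules listed for that task's predefined skills, leaving the rest untouched. All names here (`SkillSparseLayer`, the `TASK_SKILLS` mapping, and the simple averaging of skill outputs) are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch of skill-guided sparse activation (hypothetical names,
# not the paper's exact architecture).
import torch
import torch.nn as nn

# Hypothetical mapping from tasks to the predefined skills they require.
TASK_SKILLS = {
    "sentiment": ["semantics", "classification"],
    "ner": ["semantics", "sequence_labeling"],
    "nli": ["semantics", "sentence_pair"],
}

class SkillSparseLayer(nn.Module):
    """Holds one feed-forward block per skill; a forward pass activates only
    the blocks whose skills are relevant to the current task."""

    def __init__(self, skills, hidden_size=768, intermediate_size=3072):
        super().__init__()
        self.skill_ffns = nn.ModuleDict({
            skill: nn.Sequential(
                nn.Linear(hidden_size, intermediate_size),
                nn.GELU(),
                nn.Linear(intermediate_size, hidden_size),
            )
            for skill in skills
        })

    def forward(self, hidden_states, task):
        active = TASK_SKILLS[task]                    # skills needed by this task
        outputs = [self.skill_ffns[s](hidden_states)  # run only those modules
                   for s in active]
        # Combine the activated skill outputs (here: a simple average).
        return torch.stack(outputs, dim=0).mean(dim=0)

# Usage: only the "semantics" and "classification" parameters are exercised;
# adding a new task amounts to adding an entry to TASK_SKILLS (and, if needed,
# a new skill module).
layer = SkillSparseLayer(skills={s for v in TASK_SKILLS.values() for s in v})
h = torch.randn(2, 16, 768)                           # (batch, seq_len, hidden)
out = layer(h, task="sentiment")
print(out.shape)  # torch.Size([2, 16, 768])
```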