Paper Title

Consecutive Question Generation via Dynamic Multitask Learning

Paper Authors

Yunji Li, Sujian Li, Xing Shi

Paper Abstract

In this paper, we propose the task of consecutive question generation (CQG), which generates a set of logically related question-answer pairs to understand a whole passage, with a comprehensive consideration of the aspects including accuracy, coverage, and informativeness. To achieve this, we first examine the four key elements of CQG, i.e., question, answer, rationale, and context history, and propose a novel dynamic multitask framework with one main task generating a question-answer pair, and four auxiliary tasks generating other elements. It directly helps the model generate good questions through both joint training and self-reranking. At the same time, to fully explore the worth-asking information in a given passage, we make use of the reranking losses to sample the rationales and search for the best question series globally. Finally, we measure our strategy by QA data augmentation and manual evaluation, as well as a novel application of generated question-answer pairs on DocNLI. We prove that our strategy can improve question generation significantly and benefit multiple related NLP tasks.
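The abstract describes the framework only at a high level. Below is a minimal, illustrative sketch (not the authors' implementation) of the general idea it outlines: a shared model trained jointly on one main task (question-answer pair generation) plus auxiliary tasks, whose auxiliary losses are reused at inference time to rerank candidate outputs. The module structure, task names, loss weight `aux_weight`, and toy data are all assumptions made for illustration.

```python
# Illustrative sketch only: joint multitask training + loss-based self-reranking.
# Not the paper's code; all names, weights, and toy inputs are assumptions.
import torch
import torch.nn as nn

VOCAB, HIDDEN = 1000, 64
# One main task plus four auxiliary tasks (naming assumed from the abstract).
TASKS = ["qa_pair", "question", "answer", "rationale", "context_history"]

class SharedSeq2Seq(nn.Module):
    """Toy shared encoder with one output head per task (stand-in for a real seq2seq)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.heads = nn.ModuleDict({t: nn.Linear(HIDDEN, VOCAB) for t in TASKS})

    def forward(self, tokens, task):
        h, _ = self.encoder(self.embed(tokens))
        return self.heads[task](h)  # (batch, seq, vocab) logits

model = SharedSeq2Seq()
ce = nn.CrossEntropyLoss()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

def task_loss(task, src, tgt):
    logits = model(src, task)
    return ce(logits.reshape(-1, VOCAB), tgt.reshape(-1))

# Joint training step: main-task loss plus weighted auxiliary losses (weight assumed).
src = torch.randint(0, VOCAB, (2, 16))   # toy "passage + history" input
tgt = torch.randint(0, VOCAB, (2, 16))   # toy target sequence
aux_weight = 0.3
loss = task_loss("qa_pair", src, tgt) + aux_weight * sum(
    task_loss(t, src, tgt) for t in TASKS[1:]
)
optim.zero_grad(); loss.backward(); optim.step()

# Self-reranking at inference (one possible reading of the abstract): score each
# candidate by its total auxiliary-task loss and keep the lowest-scoring one.
@torch.no_grad()
def rerank(candidates):
    scores = [sum(task_loss(t, s, y).item() for t in TASKS[1:]) for s, y in candidates]
    return min(range(len(candidates)), key=lambda i: scores[i])

cands = [(torch.randint(0, VOCAB, (1, 16)), torch.randint(0, VOCAB, (1, 16)))
         for _ in range(3)]
print("best candidate index:", rerank(cands))
```

This sketch only shows the loss bookkeeping; the paper's global search over question sequences and rationale sampling are not reproduced here.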
