Paper Title

Text-to-Text Pre-Training for Data-to-Text Tasks

Paper Authors

Mihir Kale, Abhinav Rastogi

Paper Abstract

We study the pre-train + fine-tune strategy for data-to-text tasks. Our experiments indicate that text-to-text pre-training in the form of T5 enables simple, end-to-end transformer-based models to outperform pipelined neural architectures tailored for data-to-text generation, as well as alternative language-model-based pre-training techniques such as BERT and GPT-2. Importantly, T5 pre-training leads to better generalization, as evidenced by large improvements on out-of-domain test sets. We hope our work serves as a useful baseline for future research, as transfer learning becomes ever more prevalent for data-to-text tasks.
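
The recipe the abstract describes treats data-to-text generation as an ordinary text-to-text problem: structured input is linearized into a flat string and a pre-trained T5 checkpoint is fine-tuned end to end. Below is a minimal sketch of that setup, assuming the Hugging Face `transformers` library and an invented E2E-style example record; it is an illustration of the general approach, not the authors' released implementation.

```python
# Minimal sketch: fine-tune a pre-trained T5 checkpoint on a linearized
# data-to-text example, then generate text from the same input.
# The field names and example record below are invented for illustration.
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical data-to-text pair: a linearized meaning representation as the
# source sequence and the reference sentence as the target sequence.
source = "name[The Eagle] eatType[coffee shop] area[city centre]"
target = "The Eagle is a coffee shop in the city centre."

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# One fine-tuning step with the standard maximum-likelihood objective.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()

# Inference: beam-search generation from a linearized input.
model.eval()
with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In practice this single gradient step would be replaced by full fine-tuning over a data-to-text corpus; the point of the sketch is that no task-specific architecture is needed beyond the linearization step.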
