Paper Title

Learning Contextualized Sentence Representations for Document-Level Neural Machine Translation

Authors

Pei Zhang, Xu Zhang, Wei Chen, Jian Yu, Yanfeng Wang, Deyi Xiong

Abstract

Document-level machine translation incorporates inter-sentential dependencies into the translation of a source sentence. In this paper, we propose a new framework to model cross-sentence dependencies by training neural machine translation (NMT) to predict both the target translation and surrounding sentences of a source sentence. By enforcing the NMT model to predict source context, we want the model to learn "contextualized" source sentence representations that capture document-level dependencies on the source side. We further propose two different methods to learn and integrate such contextualized sentence embeddings into NMT: a joint training method that jointly trains an NMT model with the source context prediction model and a pre-training & fine-tuning method that pretrains the source context prediction model on a large-scale monolingual document corpus and then fine-tunes it with the NMT model. Experiments on Chinese-English and English-German translation show that both methods can substantially improve the translation quality over a strong document-level Transformer baseline.
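The joint training method described in the abstract combines the usual translation objective with auxiliary objectives for predicting the surrounding source sentences. A minimal sketch of such a multi-task loss is shown below; the function name, the weighting factor `lambda_ctx`, and the example loss values are illustrative assumptions, not the paper's actual implementation:

```python
def joint_training_loss(trans_loss, prev_ctx_loss, next_ctx_loss, lambda_ctx=0.5):
    """Combine the NMT translation loss with auxiliary losses for
    predicting the previous and next source sentences.

    lambda_ctx is a hypothetical weight that balances the source-context
    prediction objectives against the main translation objective.
    """
    return trans_loss + lambda_ctx * (prev_ctx_loss + next_ctx_loss)

# Example: the translation loss dominates; the two context-prediction
# losses contribute at half weight.
loss = joint_training_loss(trans_loss=2.0, prev_ctx_loss=1.0, next_ctx_loss=0.6)
# loss == 2.0 + 0.5 * (1.0 + 0.6) == 2.8
```

In the pre-training & fine-tuning variant, the same context-prediction losses would first be minimized alone on a monolingual document corpus before the combined objective is used during fine-tuning with the NMT model.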
