Paper Title

Look Backward and Forward: Self-Knowledge Distillation with Bidirectional Decoder for Neural Machine Translation

Paper Authors

Xuanwei Zhang, Libin Shen, Disheng Pan, Liang Wang, Yanjun Miao

Paper Abstract

Neural Machine Translation (NMT) models are usually trained with a unidirectional decoder, which corresponds to optimizing one-step-ahead prediction. However, this unidirectional decoding framework may tend to focus on local structure rather than global coherence. To alleviate this problem, we propose a novel method, Self-Knowledge Distillation with Bidirectional Decoder for Neural Machine Translation (SBD-NMT). We deploy a backward decoder that can act as an effective regularizer for the forward decoder. By leveraging the backward decoder's information about the longer-term future, distilling the knowledge learned in the backward decoder can encourage auto-regressive NMT models to plan ahead. Experiments show that our method significantly outperforms strong Transformer baselines on multiple machine translation datasets.
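The abstract outlines the idea (a backward decoder whose knowledge is distilled into the forward decoder) but does not give the training objective. Below is a minimal, hypothetical PyTorch sketch of what such a self-distillation loss could look like: a standard cross-entropy term for the forward decoder plus a KL term that pulls the forward decoder's distribution toward the backward decoder's (stop-gradient) distribution. The function name sbd_loss, the weight lambda_kd, and the assumption that the backward decoder's logits have already been re-aligned to left-to-right order are illustrative, not taken from the paper.

```python
# Hypothetical sketch of a self-knowledge-distillation objective with a
# bidirectional (forward + backward) decoder; names and weighting are assumptions.
import torch
import torch.nn.functional as F

def sbd_loss(forward_logits, backward_logits, target, pad_id, lambda_kd=0.5):
    """forward_logits, backward_logits: (batch, tgt_len, vocab).
    backward_logits are assumed to come from a right-to-left decoder and to have
    been re-aligned so both decoders score the same target positions."""
    vocab = forward_logits.size(-1)

    # Standard one-step-ahead cross-entropy for the forward decoder.
    ce = F.cross_entropy(
        forward_logits.reshape(-1, vocab),
        target.reshape(-1),
        ignore_index=pad_id,
    )

    # Distill the backward decoder's distribution (detached, so it acts as a
    # fixed teacher for this term) into the forward decoder.
    kd = F.kl_div(
        F.log_softmax(forward_logits, dim=-1),
        F.softmax(backward_logits.detach(), dim=-1),
        reduction="batchmean",
    )

    return ce + lambda_kd * kd
```

In this reading, the KL term is what regularizes the forward decoder with information about the longer-term future, since the backward decoder conditions on the tokens that follow each position.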
