Paper Title
Dynamic Position Encoding for Transformers
Paper Authors
Paper Abstract
Recurrent models have dominated the field of neural machine translation (NMT) for the past few years. Transformers \citep{vaswani2017attention} have radically changed this by proposing a novel architecture that relies on a feed-forward backbone and a self-attention mechanism. Although Transformers are powerful, they can fail to properly encode sequential/positional information due to their non-recurrent nature. To address this problem, position embeddings are defined for each time step to enrich word representations with positional information. However, such embeddings are fixed after training, regardless of the task and the word-ordering system of the source or target language. In this paper, we propose a novel architecture whose position embeddings depend on the input text, addressing this shortcoming by taking the order of target words into consideration. Instead of using predefined position embeddings, our solution generates new embeddings to refine each word's position information. Since we do not fix the positions of source tokens in advance but learn them in an end-to-end fashion, we refer to our method as dynamic position encoding (DPE). We evaluated the impact of our model on multiple datasets, translating from English into German, French, and Italian, and observed meaningful improvements over the original Transformer.
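To make the contrast with fixed position embeddings concrete, below is a minimal PyTorch-style sketch of an input-dependent position signal layered on top of the standard sinusoidal table. The module name DynamicPositionEncoding, the two-layer refinement network, and the additive combination are illustrative assumptions for exposition only, not the paper's exact DPE layers or training objective.

    import math
    import torch
    import torch.nn as nn

    def sinusoidal_encoding(max_len, d_model):
        # Fixed sinusoidal position table from Vaswani et al. (2017); assumes d_model is even.
        pos = torch.arange(max_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        return pe

    class DynamicPositionEncoding(nn.Module):
        # Hypothetical sketch: derives a content-dependent positional refinement
        # and adds it to the usual fixed sinusoidal encoding.
        def __init__(self, d_model, max_len=512):
            super().__init__()
            self.register_buffer("pe", sinusoidal_encoding(max_len, d_model))
            # Small feed-forward network mapping each token's representation to a
            # positional refinement vector, learned end to end with the model.
            self.refine = nn.Sequential(
                nn.Linear(d_model, d_model),
                nn.ReLU(),
                nn.Linear(d_model, d_model),
            )

        def forward(self, x):
            # x: (batch, seq_len, d_model) token embeddings
            fixed = self.pe[: x.size(1)].unsqueeze(0)   # fixed sinusoidal part
            dynamic = self.refine(x + fixed)            # input-dependent part
            return x + fixed + dynamic

In this sketch the fixed table supplies absolute positions as in the original Transformer, while the learned refinement lets the positional signal vary with the actual source sentence; how such a signal is supervised toward target-side word order is the subject of the paper itself.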