Paper Title

ALIGN-MLM: Word Embedding Alignment is Crucial for Multilingual Pre-training

Authors

Henry Tang, Ameet Deshpande, Karthik Narasimhan

Abstract

Multilingual pre-trained models exhibit zero-shot cross-lingual transfer, where a model fine-tuned on a source language achieves surprisingly good performance on a target language. While studies have attempted to understand transfer, they focus only on MLM, and the large number of differences between natural languages makes it hard to disentangle the importance of different properties. In this work, we specifically highlight the importance of word embedding alignment by proposing a pre-training objective (ALIGN-MLM) whose auxiliary loss guides similar words in different languages to have similar word embeddings. ALIGN-MLM either outperforms or matches three widely adopted objectives (MLM, XLM, DICT-MLM) when we evaluate transfer between pairs of natural languages and their counterparts created by systematically modifying specific properties like the script. In particular, ALIGN-MLM outperforms XLM and MLM by 35 and 30 F1 points on POS-tagging for transfer between languages that differ both in their script and word order (left-to-right vs. right-to-left). We also show a strong correlation between alignment and transfer for all objectives (e.g., rho=0.727 for XNLI), which, together with ALIGN-MLM's strong performance, calls for explicitly aligning word embeddings for multilingual models.
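
The abstract describes ALIGN-MLM as standard masked language modeling plus an auxiliary loss that pulls the embeddings of translation-paired words in different languages toward each other. Below is a minimal PyTorch sketch of that idea; the cosine-distance form of the penalty, the `align_weight` coefficient, and the assumption that word pairs come from a bilingual dictionary are illustrative choices, not the paper's actual implementation.

```python
# Hypothetical sketch of an MLM objective with an auxiliary word-embedding
# alignment loss, in the spirit of ALIGN-MLM as described in the abstract.
import torch
import torch.nn.functional as F

def align_mlm_loss(mlm_logits, mlm_labels, embedding_layer,
                   src_ids, tgt_ids, align_weight=1.0):
    """Combine a standard MLM loss with an alignment penalty that pulls
    embeddings of dictionary-paired source/target words together.

    src_ids, tgt_ids: 1-D tensors of vocabulary indices for word pairs
    assumed to be translations of each other (e.g., from a bilingual
    dictionary); this pairing source is an assumption of the sketch.
    """
    # Standard masked-language-modeling cross-entropy (masked-out
    # positions labeled -100 are ignored, as in common MLM setups).
    mlm_loss = F.cross_entropy(
        mlm_logits.view(-1, mlm_logits.size(-1)),
        mlm_labels.view(-1),
        ignore_index=-100,
    )

    # Auxiliary alignment loss: encourage paired words in the two languages
    # to have similar word embeddings (here, via mean cosine distance).
    src_emb = embedding_layer(src_ids)   # (num_pairs, hidden_dim)
    tgt_emb = embedding_layer(tgt_ids)   # (num_pairs, hidden_dim)
    align_loss = 1.0 - F.cosine_similarity(src_emb, tgt_emb, dim=-1).mean()

    # Weighted sum; the weighting scheme here is an assumed simplification.
    return mlm_loss + align_weight * align_loss
```

In such a setup, the alignment term only touches the embedding layer for the paired vocabulary indices, so it can be added on top of an existing MLM training loop with little change; the reported correlation between alignment and transfer (e.g., rho=0.727 for XNLI) is the abstract's motivation for including a term of this kind.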
