Paper Title

Query-Key Normalization for Transformers

Paper Authors

Alex Henry, Prudhvi Raj Dachapally, Shubham Pawar, Yuxuan Chen

Paper Abstract

Low-resource language translation is a challenging but socially valuable NLP task. Building on recent work adapting the Transformer's normalization to this setting, we propose QKNorm, a normalization technique that modifies the attention mechanism to make the softmax function less prone to arbitrary saturation without sacrificing expressivity. Specifically, we apply $\ell_2$ normalization along the head dimension of each query and key matrix prior to multiplying them and then scale up by a learnable parameter instead of dividing by the square root of the embedding dimension. We show improvements averaging 0.928 BLEU over state-of-the-art bilingual benchmarks for 5 low-resource translation pairs from the TED Talks corpus and IWSLT'15.
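
To make the mechanism concrete, below is a minimal single-head sketch in PyTorch of attention with query-key normalization as described in the abstract: queries and keys are $\ell_2$-normalized along the head (feature) dimension, their dot products are scaled by a learnable parameter instead of $1/\sqrt{d_k}$, and the result is passed through the softmax. The module name, projection layout, and the initial value of the learnable scale `g` are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F


class QKNormAttention(torch.nn.Module):
    """Single-head attention sketch with query-key normalization.

    Instead of dividing QK^T by sqrt(d_k), queries and keys are
    l2-normalized along the head dimension, so each dot product is a
    cosine similarity, and the logits are scaled by a learnable scalar
    g before the softmax.
    """

    def __init__(self, d_model: int, g_init: float = 10.0):
        super().__init__()
        self.q_proj = torch.nn.Linear(d_model, d_model)
        self.k_proj = torch.nn.Linear(d_model, d_model)
        self.v_proj = torch.nn.Linear(d_model, d_model)
        # Learnable scale replacing the 1/sqrt(d_k) factor; g_init is an
        # illustrative choice here, not the paper's initialization scheme.
        self.g = torch.nn.Parameter(torch.tensor(g_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q = F.normalize(self.q_proj(x), p=2, dim=-1)  # l2-normalize queries
        k = F.normalize(self.k_proj(x), p=2, dim=-1)  # l2-normalize keys
        v = self.v_proj(x)
        # Cosine-similarity logits scaled by the learnable parameter g
        scores = self.g * torch.matmul(q, k.transpose(-2, -1))
        attn = torch.softmax(scores, dim=-1)
        return torch.matmul(attn, v)


if __name__ == "__main__":
    layer = QKNormAttention(d_model=64)
    out = layer(torch.randn(2, 10, 64))
    print(out.shape)  # torch.Size([2, 10, 64])
```

Because the normalized dot products are bounded in $[-1, 1]$, the softmax input cannot saturate arbitrarily, while the learnable scale `g` preserves the model's ability to sharpen or flatten the attention distribution.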
