Paper Title
HilMeMe: A Human-in-the-Loop Machine Translation Evaluation Metric Looking into Multi-Word Expressions
Paper Authors
Paper Abstract
With the fast development of Machine Translation (MT) systems, and especially the new boost from Neural MT (NMT) models, MT output quality has reached a new level of accuracy. However, many researchers have criticised popular evaluation metrics such as BLEU for being unable to correctly distinguish between state-of-the-art NMT systems with regard to quality differences. In this short paper, we describe the design and implementation of a linguistically motivated, human-in-the-loop evaluation metric that looks into idiomatic and terminological Multi-word Expressions (MWEs). MWEs have been a bottleneck in many Natural Language Processing (NLP) tasks, including MT. MWEs can serve as one of the main factors for distinguishing different MT systems, by examining their capabilities in recognising and translating MWEs in an accurate and meaning-equivalent manner.
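The criticism of BLEU above can be illustrated with a minimal sketch. The example sentences and the simplified n-gram precision below are an illustration of the general point, not material from the paper: two hypotheses, one a meaning-equivalent idiomatic rendering and one semantically wrong, can receive identical surface-overlap scores against the same reference.

```python
from collections import Counter

def ngram_precision(hyp, ref, n):
    """Modified n-gram precision, the core quantity inside BLEU."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    # Clip each hypothesis n-gram count by its count in the reference.
    overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    total = max(sum(hyp_ngrams.values()), 1)
    return overlap / total

# Hypothetical example: the reference paraphrases an idiom for dying.
ref = "he passed away yesterday".split()
hyp_idiomatic = "he kicked the bucket yesterday".split()  # correct idiomatic MWE
hyp_wrong = "he dropped the keys yesterday".split()       # meaning lost

for n in (1, 2):
    p_good = ngram_precision(hyp_idiomatic, ref, n)
    p_bad = ngram_precision(hyp_wrong, ref, n)
    print(f"{n}-gram precision: idiomatic={p_good:.2f}, wrong={p_bad:.2f}")
```

Both hypotheses score identically at the n-gram level, even though only the first preserves the meaning of the source idiom; this is the kind of adequacy gap a human-in-the-loop MWE-focused metric is designed to surface.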