Paper Title


Can NMT Understand Me? Towards Perturbation-based Evaluation of NMT Models for Code Generation

Authors

Pietro Liguori, Cristina Improta, Simona De Vivo, Roberto Natella, Bojan Cukic, Domenico Cotroneo

Abstract


Neural Machine Translation (NMT) has reached a level of maturity at which it is recognized as the premier method for translation between different languages, and it has aroused interest in several research areas, including software engineering. A key step in validating the robustness of NMT models is evaluating their performance on adversarial inputs, i.e., inputs obtained from the original ones by adding small amounts of perturbation. However, for the specific task of code generation (i.e., generating code from a description in natural language), no approach has yet been defined to validate the robustness of NMT models. In this work, we address the problem by identifying a set of perturbations and metrics tailored to the robustness assessment of such models. We present a preliminary experimental evaluation, showing which types of perturbations affect the models the most and deriving useful insights for future directions.
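The abstract does not enumerate the paper's actual perturbation set, but the idea of an adversarial input, a natural-language description altered by a small perturbation, can be sketched with a hypothetical word-omission perturbation (illustrative only, not the authors' method):

```python
import random

def omit_word(description: str, seed: int = 0) -> str:
    """Drop one randomly chosen word from a natural-language description,
    producing a small perturbed variant of the input.
    Illustrative sketch only; not the perturbation set from the paper."""
    rng = random.Random(seed)
    words = description.split()
    if len(words) <= 1:
        return description  # nothing meaningful to drop
    idx = rng.randrange(len(words))
    return " ".join(words[:idx] + words[idx + 1:])

# A hypothetical code-generation prompt and its perturbed counterpart:
original = "copy the value of register eax into register ebx"
perturbed = omit_word(original)
print(perturbed)  # same intent, one word removed
```

A robustness evaluation would then compare the model's output on `original` and `perturbed` under suitable metrics, checking whether the small change in the description causes a disproportionate change in the generated code.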
