Title

On the Transferability of Pre-trained Language Models for Low-Resource Programming Languages

Authors

Fuxiang Chen, Fatemeh Fard, David Lo, Timofey Bryksin

Abstract

A recent study by Ahmed and Devanbu reported that using a corpus of code written in multiple programming languages to fine-tune multilingual Pre-trained Language Models (PLMs) achieves higher performance than using a corpus of code written in just one programming language. However, no analysis was made with respect to fine-tuning monolingual PLMs. Furthermore, some programming languages are inherently different, and code written in one language usually cannot be interchanged with code in another; e.g., Ruby and Java code possess very different structures. To better understand how monolingual and multilingual PLMs affect different programming languages, we investigate 1) the performance of PLMs on Ruby for two popular Software Engineering tasks: Code Summarization and Code Search, 2) the strategy (for selecting programming languages) that works well when fine-tuning multilingual PLMs for Ruby, and 3) the performance of the fine-tuned PLMs on Ruby given different code lengths. In this work, we analyze over a hundred pre-trained and fine-tuned models. Our results show that 1) multilingual PLMs have a lower Performance-to-Time Ratio (the BLEU, METEOR, or MRR score over the fine-tuning duration) than monolingual PLMs, 2) our proposed strategy for selecting target programming languages to fine-tune multilingual PLMs is effective: it reduces the fine-tuning time yet achieves higher performance on the Code Summarization and Code Search tasks, and 3) our proposed strategy consistently shows good performance across different code lengths.
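
The abstract describes the Performance-to-Time Ratio only informally (a BLEU, METEOR, or MRR score over the fine-tuning duration). The sketch below illustrates one way such a ratio could be computed; the function name, the choice of hours as the time unit, and the numbers in the usage example are illustrative assumptions, not values taken from the paper.

```python
def performance_to_time_ratio(score: float, fine_tuning_hours: float) -> float:
    """Illustrative Performance-to-Time Ratio: a task metric (BLEU, METEOR,
    or MRR) divided by the fine-tuning duration. The hour-based time unit is
    an assumption; the paper describes the ratio only informally."""
    if fine_tuning_hours <= 0:
        raise ValueError("fine-tuning duration must be positive")
    return score / fine_tuning_hours


# Hypothetical comparison (made-up numbers): a monolingual PLM that reaches a
# slightly lower BLEU score in much less fine-tuning time yields a higher
# ratio than a multilingual PLM, matching the direction of the paper's claim.
mono_ptr = performance_to_time_ratio(score=12.5, fine_tuning_hours=2.0)    # 6.25
multi_ptr = performance_to_time_ratio(score=14.0, fine_tuning_hours=10.0)  # 1.40
print(f"monolingual PTR={mono_ptr:.2f}, multilingual PTR={multi_ptr:.2f}")
```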
