Paper Title
Pre-Training Representations of Binary Code Using Contrastive Learning
Paper Authors
Paper Abstract
Binary code analysis and comprehension are critical to applications in reverse engineering and computer security tasks where source code is not available. Unfortunately, unlike source code, binary code lacks semantics and is more difficult for human engineers to understand and analyze. In this paper, we present ContraBin, a contrastive learning technique that integrates source code and comment information along with binaries to create an embedding capable of aiding binary analysis and comprehension tasks. Specifically, ContraBin comprises three components: (1) a primary contrastive learning method for initial pre-training, (2) a simplex interpolation method to integrate source code, comments, and binary code, and (3) an intermediate representation learning algorithm to train a binary code embedding. We further analyze the impact of human-written and synthetic comments on binary code comprehension tasks, revealing a significant performance disparity: while synthetic comments provide substantial benefits, human-written comments introduce noise and can even degrade performance relative to using no comments at all. These findings reshape the narrative around the role of comment types in binary code analysis. We evaluate the effectiveness of ContraBin on four indicative downstream tasks related to binary code: algorithmic functionality classification, function name recovery, code summarization, and reverse engineering. The results show that ContraBin considerably improves performance on all four tasks, measured by accuracy, mean average precision, and BLEU score as appropriate. ContraBin is the first language representation model to incorporate source code, binary code, and comments into contrastive code representation learning, and it is intended to contribute to the field of binary code analysis. The dataset used in this study is available for further research.
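To make the two core ideas named in the abstract concrete, below is a minimal, hypothetical sketch: simplex interpolation mixes the source, comment, and binary embeddings with weights drawn from the probability simplex, and an InfoNCE-style contrastive loss with in-batch negatives pulls the binary embedding toward the mixture. The function names, the Dirichlet sampling choice, and the temperature value are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of simplex interpolation + a contrastive objective,
# assuming three per-modality encoders have already produced embeddings.
import torch
import torch.nn.functional as F

def simplex_interpolate(h_src, h_cmt, h_bin):
    """Mix the three modality embeddings with per-example weights drawn
    from the probability simplex (a Dirichlet(1,1,1) sample)."""
    lam = torch.distributions.Dirichlet(torch.ones(3)).sample((h_src.size(0),))
    lam = lam.to(h_src.device)  # shape (B, 3); each row sums to 1
    return (lam[:, 0:1] * h_src
            + lam[:, 1:2] * h_cmt
            + lam[:, 2:3] * h_bin)

def info_nce(anchor, positive, temperature=0.07):
    """Standard InfoNCE over in-batch negatives: each anchor should match
    its own positive row and repel every other row in the batch."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)

# Usage with random stand-ins for the three encoders' outputs:
B, D = 8, 256
h_src, h_cmt, h_bin = (torch.randn(B, D) for _ in range(3))
mixed = simplex_interpolate(h_src, h_cmt, h_bin)
loss = info_nce(mixed, h_bin)  # align binary embeddings with the mixture
print(loss.item())
```

A uniform Dirichlet is used here so that every convex combination of the three modalities is equally likely, exposing the binary encoder to the full range of source/comment/binary mixtures during pre-training; this sampling choice is an assumption of the sketch.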