Paper Title
BinBert: Binary Code Understanding with a Fine-tunable and Execution-aware Transformer
Paper Authors
Paper Abstract
A recent trend in binary code analysis promotes the use of neural solutions based on instruction embedding models. An instruction embedding model is a neural network that transforms sequences of assembly instructions into embedding vectors. If the embedding network is trained such that the translation from code to vectors partially preserves the semantics, the network effectively represents an assembly code model. In this paper we present BinBert, a novel assembly code model. BinBert is built on a transformer pre-trained on a large dataset of both assembly instruction sequences and symbolic execution information. BinBert can be applied to assembly instruction sequences and it is fine-tunable, i.e., it can be re-trained as part of a neural architecture on task-specific data. Through fine-tuning, BinBert learns how to apply the general knowledge acquired during pre-training to the specific task. We evaluated BinBert on a multi-task benchmark that we specifically designed to test the understanding of assembly code. The benchmark is composed of several tasks, some taken from the literature and a few novel tasks that we designed, with a mix of intrinsic and downstream tasks. Our results show that BinBert outperforms state-of-the-art models for binary instruction embedding, raising the bar for binary code understanding.
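To make the abstract's two core ideas concrete (an instruction embedding model that maps assembly token sequences to vectors, and fine-tuning that re-trains the pre-trained encoder inside a task-specific architecture), here is a minimal sketch in PyTorch. All names (AsmEncoder, SimilarityHead, the vocabulary size, pooling choice, and the pair-similarity task head) are hypothetical illustrations, not BinBert's actual architecture or API.

```python
# Hypothetical sketch of an instruction-embedding transformer and a
# fine-tuning head; dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class AsmEncoder(nn.Module):
    """Maps a tokenized assembly instruction sequence to one embedding vector."""
    def __init__(self, vocab_size=10_000, dim=256, heads=4, layers=2):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, token_ids):              # (batch, seq_len)
        h = self.encoder(self.tok(token_ids))  # (batch, seq_len, dim)
        return h.mean(dim=1)                   # mean-pool to (batch, dim)

class SimilarityHead(nn.Module):
    """Fine-tuning: the pre-trained encoder keeps training as part of a
    task-specific architecture, here a 'same semantics?' pair classifier."""
    def __init__(self, encoder, dim=256):
        super().__init__()
        self.encoder = encoder                 # weights are updated jointly
        self.cls = nn.Linear(2 * dim, 2)

    def forward(self, a_ids, b_ids):
        ea, eb = self.encoder(a_ids), self.encoder(b_ids)
        return self.cls(torch.cat([ea, eb], dim=-1))

if __name__ == "__main__":
    model = SimilarityHead(AsmEncoder())
    a = torch.randint(0, 10_000, (8, 32))      # 8 sequences of 32 tokens
    b = torch.randint(0, 10_000, (8, 32))
    print(model(a, b).shape)                   # torch.Size([8, 2])
```

The design point the sketch illustrates is the one the abstract emphasizes: the same encoder that produces general-purpose embeddings is re-trained end to end inside the downstream model, rather than being frozen as a fixed feature extractor.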