Paper Title

Transformer for Graphs: An Overview from Architecture Perspective

Paper Authors

Erxue Min, Runfa Chen, Yatao Bian, Tingyang Xu, Kangfei Zhao, Wenbing Huang, Peilin Zhao, Junzhou Huang, Sophia Ananiadou, Yu Rong

Paper Abstract

Recently, the Transformer model, which has achieved great success in many artificial intelligence fields, has demonstrated great potential in modeling graph-structured data. To date, a great variety of Transformer variants have been proposed to adapt to graph-structured data. However, a comprehensive literature review and systematic evaluation of these Transformer variants for graphs are still unavailable. It is imperative to sort out the existing Transformer models for graphs and systematically investigate their effectiveness on various graph tasks. In this survey, we provide a comprehensive review of various Graph Transformer models from the architectural design perspective. We first disassemble the existing models and summarize three typical ways to incorporate graph information into the vanilla Transformer: 1) GNNs as Auxiliary Modules, 2) Improved Positional Embedding from Graphs, and 3) Improved Attention Matrix from Graphs. Furthermore, we implement the representative components in the three groups and conduct a comprehensive comparison on various well-known graph data benchmarks to investigate the real performance gain of each component. Our experiments confirm the benefits of current graph-specific modules on the Transformer and reveal their advantages on different kinds of graph tasks.
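To make the taxonomy concrete, below is a minimal NumPy sketch (our own illustration, not code from the paper) of the three architectural strategies the abstract names. The specific choices here are simplified assumptions standing in for the surveyed families: a mean-aggregation message-passing step for "GNNs as Auxiliary Modules", Laplacian eigenvectors for "Improved Positional Embedding from Graphs", and a Graphormer-style shortest-path bias for "Improved Attention Matrix from Graphs". All function names are hypothetical.

```python
# Illustrative sketch of the survey's three ways to inject graph structure
# into a vanilla Transformer layer. Simplified, single-head, no learned weights.
import numpy as np

def laplacian_positional_encoding(adj, k):
    """Way 2 (assumed variant): use the k smallest non-trivial eigenvectors
    of the normalized graph Laplacian as node positional embeddings."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)          # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]                # drop the trivial eigenvector

def shortest_path_bias(adj):
    """Way 3 (assumed variant): bias attention logits by negative shortest-path
    distance, in the spirit of Graphormer-style spatial encoding."""
    n = len(adj)
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for m in range(n):                        # Floyd-Warshall
        dist = np.minimum(dist, dist[:, [m]] + dist[[m], :])
    return -dist                              # farther nodes get smaller logits

def gnn_layer(x, adj):
    """Way 1 (assumed variant): one mean-aggregation message-passing step,
    applied as an auxiliary module before self-attention."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    return (adj @ x) / deg + x

def graph_attention(x, adj, k_pe=2):
    x = gnn_layer(x, adj)                                                      # way 1
    x = np.concatenate([x, laplacian_positional_encoding(adj, k_pe)], axis=1)  # way 2
    scores = x @ x.T / np.sqrt(x.shape[1])
    scores = scores + shortest_path_bias(adj)                                  # way 3
    scores = np.where(np.isinf(scores), -1e9, scores)  # mask disconnected pairs
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

if __name__ == "__main__":
    adj = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    x = np.random.default_rng(0).normal(size=(4, 4))
    print(graph_attention(x, adj).shape)  # (4, 6): 4 features + 2 positional dims
```

Note that the three mechanisms are orthogonal and can be combined, as above; the survey's experiments compare such representative components individually to isolate the performance gain of each.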
