Paper Title

Structure-Aware Transformer for Graph Representation Learning

Paper Authors

Dexiong Chen, Leslie O'Bray, Karsten Borgwardt

Paper Abstract

The Transformer architecture has gained growing attention in graph representation learning recently, as it naturally overcomes several limitations of graph neural networks (GNNs) by avoiding their strict structural inductive biases and instead only encoding the graph structure via positional encoding. Here, we show that the node representations generated by the Transformer with positional encoding do not necessarily capture structural similarity between them. To address this issue, we propose the Structure-Aware Transformer, a class of simple and flexible graph Transformers built upon a new self-attention mechanism. This new self-attention incorporates structural information into the original self-attention by extracting a subgraph representation rooted at each node before computing the attention. We propose several methods for automatically generating the subgraph representation and show theoretically that the resulting representations are at least as expressive as the subgraph representations. Empirically, our method achieves state-of-the-art performance on five graph prediction benchmarks. Our structure-aware framework can leverage any existing GNN to extract the subgraph representation, and we show that it systematically improves performance relative to the base GNN model, successfully combining the advantages of GNNs and Transformers. Our code is available at https://github.com/BorgwardtLab/SAT.
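To illustrate the mechanism described in the abstract, below is a minimal PyTorch sketch of structure-aware attention: a small message-passing extractor summarizes the k-hop subgraph rooted at each node, and those structure-aware node representations form the queries and keys of an otherwise standard self-attention. This is not the authors' implementation (see the repository linked above); the class name StructureAwareAttention, the dense-adjacency setup, and the mean-aggregation extractor are illustrative assumptions made only to keep the example self-contained.

# Minimal sketch of structure-aware attention (illustrative, not the SAT codebase).
# Assumptions: dense adjacency matrix, single graph, mean-aggregation extractor.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StructureAwareAttention(nn.Module):
    def __init__(self, dim: int, num_hops: int = 2):
        super().__init__()
        # Simple message-passing layers that summarize the k-hop subgraph
        # rooted at each node (the paper allows any GNN here).
        self.gnn_layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_hops))
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)

    def subgraph_repr(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # After num_hops rounds of mean aggregation, each node's vector
        # summarizes its rooted k-hop subgraph.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        h = x
        for layer in self.gnn_layers:
            h = F.relu(layer((adj @ h) / deg + h))
        return h

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Queries and keys come from structure-aware representations, so
        # attention scores depend on local structure, not node features alone.
        s = self.subgraph_repr(x, adj)
        q, k, v = self.q_proj(s), self.k_proj(s), self.v_proj(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
        return attn @ v


# Toy usage: a 5-node path graph with 8-dimensional node features.
if __name__ == "__main__":
    n, d = 5, 8
    adj = torch.zeros(n, n)
    for i in range(n - 1):
        adj[i, i + 1] = adj[i + 1, i] = 1.0
    x = torch.randn(n, d)
    out = StructureAwareAttention(d)(x, adj)
    print(out.shape)  # torch.Size([5, 8])

In the paper's framework the subgraph extractor is pluggable (any existing GNN can be used); the mean-aggregation layers above stand in for that component purely for brevity.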
