Paper Title

Learnable Graph Convolutional Attention Networks

Authors

Adrián Javaloy, Pablo Sanchez-Martin, Amit Levi, Isabel Valera

Abstract

Existing Graph Neural Networks (GNNs) compute the message exchange between nodes either by aggregating the features of all neighboring nodes uniformly (convolving), or by applying a non-uniform score to the features (attending). Recent works have shown the strengths and weaknesses of the resulting GNN architectures, GCNs and GATs respectively. In this work, we aim to exploit the strengths of both approaches to their full extent. To this end, we first introduce the graph convolutional attention layer (CAT), which relies on convolutions to compute the attention scores. Unfortunately, as with GCNs and GATs, we show that there is no clear winner among the three (neither theoretically nor in practice), as their performance directly depends on the nature of the data (i.e., of the graph and features). This result brings us to the main contribution of our work, the learnable graph convolutional attention network (L-CAT): a GNN architecture that automatically interpolates between GCN, GAT, and CAT in each layer by adding only two scalar parameters. Our results demonstrate that L-CAT is able to efficiently combine different GNN layers along the network, outperforming competing methods on a wide range of datasets and resulting in a more robust model that reduces the need for cross-validation.
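
To make the two-parameter interpolation concrete, below is a minimal PyTorch sketch of an L-CAT-style layer. It assumes one plausible parameterization (the names LCATLayer, lam1, and lam2 are illustrative, not the authors' code): lam1 scales the attention logits, so lam1 → 0 yields uniform (GCN-like) aggregation, while lam2 blends the attention inputs between raw node features (GAT-like) and convolved neighborhood features (CAT-like). The paper's exact formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LCATLayer(nn.Module):
    """Illustrative L-CAT-style layer (a sketch, not the authors' implementation).

    Two learnable scalars, squashed to [0, 1], interpolate the layer:
      * lam1 -> 0: uniform neighbor weights (GCN-like aggregation);
      * lam1 -> 1, lam2 -> 0: attention on raw features (GAT-like);
      * lam1 -> 1, lam2 -> 1: attention on convolved features (CAT-like).
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a_src = nn.Parameter(torch.randn(out_dim) * 0.01)
        self.a_dst = nn.Parameter(torch.randn(out_dim) * 0.01)
        # Unconstrained scalars; sigmoid keeps the interpolation in [0, 1].
        self._lam1 = nn.Parameter(torch.zeros(1))
        self._lam2 = nn.Parameter(torch.zeros(1))

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, in_dim) node features; adj: (N, N) dense 0/1 adjacency,
        # assumed to include self-loops.
        lam1 = torch.sigmoid(self._lam1)
        lam2 = torch.sigmoid(self._lam2)
        z = self.W(h)                                     # (N, out_dim)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        z_conv = (adj @ z) / deg                          # mean-convolved features
        # lam2 blends the attention *inputs*: raw (GAT) vs. convolved (CAT).
        u = (1.0 - lam2) * z + lam2 * z_conv
        logits = F.leaky_relu(
            (u @ self.a_src).unsqueeze(1) + (u @ self.a_dst).unsqueeze(0),
            negative_slope=0.2,
        )
        # lam1 scales the logits: at 0, every neighbor gets the same weight.
        logits = lam1 * logits
        logits = logits.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(logits, dim=1)              # row-wise over neighbors
        return alpha @ z


# Toy usage: 4 nodes on a ring with self-loops, mapping 8 -> 16 features.
N = 4
adj = torch.eye(N)
for i in range(N):
    adj[i, (i + 1) % N] = adj[i, (i - 1) % N] = 1.0
layer = LCATLayer(8, 16)
out = layer(torch.randn(N, 8), adj)                       # shape: (4, 16)
```

Because lam1 and lam2 are ordinary parameters, gradient descent can settle on a different operating point per layer, which is what lets L-CAT mix layer types along the network without cross-validating over them.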
