Paper Title

Adaptive Graph Diffusion Networks

Authors

Chuxiong Sun, Jie Hu, Hongming Gu, Jinpeng Chen, Mingchuan Yang

Abstract

Graph Neural Networks (GNNs) have received much attention in the graph deep learning domain. However, recent research shows, both empirically and theoretically, that deep GNNs suffer from over-fitting and over-smoothing problems. The usual solutions either fail to address the extensive runtime of deep GNNs or restrict graph convolution to a single feature space. We propose Adaptive Graph Diffusion Networks (AGDNs), which perform multi-layer generalized graph diffusion in different feature spaces with moderate complexity and runtime. Standard graph diffusion methods combine large, dense powers of the transition matrix with predefined weighting coefficients. AGDNs instead combine smaller multi-hop node representations with learnable, generalized weighting coefficients. We propose two scalable weighting-coefficient mechanisms for capturing multi-hop information: Hop-wise Attention (HA) and Hop-wise Convolution (HC). We evaluate AGDNs on diverse, challenging Open Graph Benchmark (OGB) datasets with semi-supervised node classification and link prediction tasks. As of the submission date (Aug 26, 2022), AGDNs achieve top-1 performance on the ogbn-arxiv, ogbn-proteins, and ogbl-ddi datasets, and top-3 performance on the ogbl-citation2 dataset. On similar Tesla V100 GPU cards, AGDNs outperform Reversible GNNs (RevGNNs) with 13% of the complexity and 1% of the training runtime of RevGNNs on the ogbn-proteins dataset. AGDNs also achieve performance comparable to SEAL with 36% of the training runtime and 0.2% of the inference runtime of SEAL on the ogbl-citation2 dataset.
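The core idea of combining multi-hop node representations with learnable weighting coefficients, rather than materializing dense powers of the transition matrix, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the hop weights here are a single softmax-normalized vector `theta` shared by all nodes (a simplification standing in for the paper's Hop-wise Attention and Hop-wise Convolution mechanisms), and the function names are hypothetical.

```python
import numpy as np

def normalized_transition(adj):
    # Random-walk normalization T = D^-1 A; rows with degree 0 are left as zeros.
    deg = adj.sum(axis=1, keepdims=True)
    return adj / np.maximum(deg, 1)

def graph_diffusion(adj, H, K, theta):
    """Combine hop-0..K representations: out = sum_k w_k * T^k H.

    Each hop is computed iteratively as Hk = T @ Hk, so only a
    (num_nodes x feat_dim) matrix is kept per hop and no dense
    power T^k is ever formed.
    """
    T = normalized_transition(adj)
    # Softmax over hops: a simple stand-in for learned coefficients.
    w = np.exp(theta) / np.exp(theta).sum()
    out = w[0] * H
    Hk = H
    for k in range(1, K + 1):
        Hk = T @ Hk            # diffuse one more hop
        out = out + w[k] * Hk  # weighted accumulation of the k-hop view
    return out
```

With `theta` trainable (and, in AGDN's spirit, computed per node or per channel), gradient descent can adapt how much each hop contributes, instead of fixing the coefficients in advance as in standard graph diffusion.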
