Paper Title
Ordered Subgraph Aggregation Networks
Paper Authors
Paper Abstract
Numerous subgraph-enhanced graph neural networks (GNNs) have emerged recently, provably boosting the expressive power of standard (message-passing) GNNs. However, there is a limited understanding of how these approaches relate to each other and to the Weisfeiler-Leman hierarchy. Moreover, current approaches either use all subgraphs of a given size, sample them uniformly at random, or use hand-crafted heuristics instead of learning to select subgraphs in a data-driven manner. Here, we offer a unified way to study such architectures by introducing a theoretical framework and extending the known expressivity results of subgraph-enhanced GNNs. Concretely, we show that increasing subgraph size always increases the expressive power and develop a better understanding of their limitations by relating them to the established $k\text{-}\mathsf{WL}$ hierarchy. In addition, we explore different approaches for learning to sample subgraphs using recent methods for backpropagating through complex discrete probability distributions. Empirically, we study the predictive performance of different subgraph-enhanced GNNs, showing that our data-driven architectures increase prediction accuracy on standard benchmark datasets compared to non-data-driven subgraph-enhanced graph neural networks while reducing computation time.
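The abstract's mention of "backpropagating through complex discrete probability distributions" refers to a family of gradient estimators for discrete sampling. The sketch below illustrates one such estimator, a perturb-and-MAP (Gumbel top-k) sample combined with a straight-through gradient; this is a minimal illustrative example, not necessarily the estimator the paper uses, and the function name `gumbel_topk_subgraph_mask` and its parameters `scores`, `k`, and `tau` are hypothetical.

```python
import torch

def gumbel_topk_subgraph_mask(scores: torch.Tensor, k: int, tau: float = 1.0) -> torch.Tensor:
    """Sample a k-hot mask over candidate subgraphs via perturb-and-MAP.

    scores: (num_candidates,) unnormalized log-weights, e.g. produced by a small
            GNN that scores each candidate subgraph (one per deleted node, say).
    Returns a mask that is discrete (k-hot) in the forward pass but passes
    gradients to `scores` through a softmax surrogate (straight-through trick).
    """
    # Gumbel(0, 1) noise turns top-k selection into sampling without replacement.
    gumbel = -torch.log(-torch.log(torch.rand_like(scores)))
    perturbed = (scores + gumbel) / tau
    topk = perturbed.topk(k).indices
    hard = torch.zeros_like(scores).scatter_(0, topk, 1.0)  # discrete k-hot sample
    soft = torch.softmax(perturbed, dim=0)                  # differentiable surrogate
    # Forward pass uses the hard mask; backward pass uses the soft one.
    return hard + soft - soft.detach()

# Illustrative use: score one node-deleted subgraph per node, keep k of them.
num_nodes = 10
scores = torch.randn(num_nodes, requires_grad=True)  # stand-in for a learned scorer
mask = gumbel_topk_subgraph_mask(scores, k=3)
```

In a data-driven subgraph-enhanced GNN of the kind the abstract describes, such a mask would weight the representations of the selected subgraphs before they are aggregated into a graph-level embedding, so that the subgraph selection itself is trained end-to-end rather than fixed by a heuristic or uniform sampling.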