Paper Title

KerGNNs: Interpretable Graph Neural Networks with Graph Kernels

Authors

Aosong Feng, Chenyu You, Shiqiang Wang, Leandros Tassiulas

Abstract

Graph kernels are historically the most widely used technique for graph classification tasks. However, these methods suffer from limited performance because of the hand-crafted combinatorial features of graphs. In recent years, graph neural networks (GNNs) have become the state-of-the-art method for downstream graph-related tasks due to their superior performance. Most GNNs are based on the Message Passing Neural Network (MPNN) framework. However, recent studies show that MPNNs cannot exceed the power of the Weisfeiler-Lehman (WL) algorithm in the graph isomorphism test. To address the limitations of existing graph kernel and GNN methods, in this paper we propose a novel GNN framework, termed Kernel Graph Neural Networks (KerGNNs), which integrates graph kernels into the message passing process of GNNs. Inspired by convolution filters in convolutional neural networks (CNNs), KerGNNs adopt trainable hidden graphs as graph filters, which are combined with subgraphs to update node embeddings using graph kernels. In addition, we show that MPNNs can be viewed as special cases of KerGNNs. We apply KerGNNs to multiple graph-related tasks and use cross-validation to make fair comparisons with benchmarks. We show that our method achieves competitive performance compared with existing state-of-the-art methods, demonstrating the potential to increase the representation ability of GNNs. We also show that the trained graph filters in KerGNNs can reveal the local graph structures of the dataset, which significantly improves model interpretability compared with conventional GNN models.
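The abstract's core mechanism can be illustrated with a minimal numerical sketch: each node's updated embedding is the vector of kernel values between its 1-hop ego subgraph and a set of hidden graphs. This is only an illustration of the idea, not the authors' implementation; it assumes a simple p-step random-walk kernel on the direct product graph, and the hidden graphs are fixed placeholders standing in for the trainable filters described in the paper. All function names here are illustrative.

```python
import numpy as np

def product_graph_rw_kernel(A1, X1, A2, X2, steps=2):
    """p-step random-walk kernel between two attributed graphs.

    Product-graph nodes are pairs (i, j), weighted by the inner
    product of their node features; walks are counted on the
    Kronecker product of the two adjacency matrices.
    """
    S = X1 @ X2.T                 # (n1, n2) feature similarities
    w = S.reshape(-1)             # weights of product-graph nodes
    Ax = np.kron(A1, A2)          # adjacency of the direct product graph
    total, vec = 0.0, w.copy()
    for _ in range(steps + 1):
        total += w @ vec          # feature-weighted walk count at this length
        vec = Ax @ vec
    return total

def kergnn_layer(A, X, hidden_As, hidden_Xs):
    """One KerGNN-style layer (sketch): node v's new embedding is the
    vector of kernel values between v's 1-hop ego subgraph and each
    hidden graph (adjacency Ah, features Xh)."""
    n = A.shape[0]
    out = np.zeros((n, len(hidden_As)))
    for v in range(n):
        nbrs = np.flatnonzero(A[v])
        idx = np.concatenate(([v], nbrs))        # ego subgraph node set
        A_sub, X_sub = A[np.ix_(idx, idx)], X[idx]
        for m, (Ah, Xh) in enumerate(zip(hidden_As, hidden_Xs)):
            out[v, m] = product_graph_rw_kernel(A_sub, X_sub, Ah, Xh)
    return out
```

In the paper's framework the hidden-graph adjacencies and features would be learned end-to-end (analogous to CNN filter weights), and the kernel scores would pass through further transformations; this sketch only shows why the output dimension equals the number of graph filters and how locality enters through the ego subgraphs.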
