Paper Title
What Functions Can Graph Neural Networks Generate?
Paper Authors
Paper Abstract
In this paper, we fully answer the above question through a key algebraic condition on graph functions, called \textit{permutation compatibility}, that relates permutations of weights and features of the graph to functional constraints. We prove that: (i) a GNN, as a graph function, is necessarily permutation compatible; (ii) conversely, any permutation compatible function, when restricted to input graphs with distinct node features, can be generated by a GNN; (iii) for arbitrary node features (not necessarily distinct), a simple feature augmentation scheme suffices to generate a permutation compatible function by a GNN; (iv) permutation compatibility can be verified by checking only quadratically many functional constraints, rather than by an exhaustive search over all permutations; (v) GNNs can generate \textit{any} graph function once we augment the node features with node identities, thus going beyond graph isomorphism and permutation compatibility. These characterizations pave the way to formally studying the intricate connection between GNNs and other algorithmic procedures on graphs. For instance, our characterization implies that many natural graph problems, such as min-cut value, max-flow value, max-clique size, and shortest path, can be generated by a GNN using a simple feature augmentation. In contrast, the celebrated Weisfeiler-Lehman graph-isomorphism test fails whenever a permutation compatible function with identical features cannot be generated by a GNN. At the heart of our analysis lies a novel representation theorem that identifies basis functions for GNNs. This enables us to translate properties of the target graph function into properties of the GNN's aggregation function.
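To make point (iv) concrete, the sketch below checks whether a scalar graph function is invariant under node permutations by testing only the pairwise transpositions, which number $\binom{n}{2}$ and generate the full symmetric group. This is a simplified illustration of the quadratic-verification idea, not the paper's exact definition of permutation compatibility (which handles general functional constraints); the function names and the example functions are hypothetical.

```python
import itertools
import numpy as np

def is_permutation_compatible(f, A, X, atol=1e-8):
    """Illustrative check: is f(A, X) invariant under node relabelings?

    Instead of testing all n! permutations, we test only the O(n^2)
    transpositions (i, j). Since transpositions generate the symmetric
    group and invariance composes, invariance under every transposition
    implies invariance under every permutation.

    A: (n, n) adjacency matrix; X: (n, d) node-feature matrix.
    NOTE: this is a sketch of the verification idea in (iv); the paper's
    notion of permutation compatibility is more general.
    """
    n = A.shape[0]
    base = f(A, X)
    for i, j in itertools.combinations(range(n), 2):
        perm = list(range(n))
        perm[i], perm[j] = perm[j], perm[i]
        P = np.eye(n)[perm]  # permutation matrix swapping nodes i and j
        # Relabel the graph: rows/columns of A and rows of X are permuted.
        if not np.allclose(f(P @ A @ P.T, P @ X), base, atol=atol):
            return False
    return True

# Hypothetical examples on a 3-node weighted graph:
A = np.array([[0., 1., 0.],
              [1., 0., 2.],
              [0., 2., 0.]])
X = np.arange(6.).reshape(3, 2)

total_weight = lambda A, X: A.sum()   # invariant under relabeling
first_feature = lambda A, X: X[0, 0]  # depends on node ordering
```

Here `total_weight` passes the check while `first_feature` fails it, since the latter changes when node 0 is swapped with a node carrying different features.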