Paper Title
Graph-Adaptive Activation Functions for Graph Neural Networks
Paper Authors
Paper Abstract
Activation functions are crucial in graph neural networks (GNNs) because they define a nonlinear family of functions that captures the relationship between the input graph data and their representations. This paper proposes activation functions for GNNs that not only adapt the nonlinearity to the graph but are also distributable. To incorporate the feature-topology coupling into all GNN components, nodal features are nonlinearized and combined with a set of trainable parameters in a form akin to graph convolutions. This yields a graph-adaptive, trainable nonlinear component of the GNN that can be implemented directly or via kernel transformations, thereby enriching the class of functions available to represent the network data. We show that permutation equivariance is preserved in both the direct and kernel forms. We also prove that the subclass of graph-adaptive max activation functions is Lipschitz stable to input perturbations. Numerical experiments on distributed source localization, finite-time consensus, distributed regression, and recommender systems corroborate our findings and show improved performance compared with pointwise as well as state-of-the-art localized nonlinearities.
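The core mechanism described in the abstract (nonlinearize the nodal features, then mix them across local neighborhoods with trainable parameters, in a form akin to a graph convolution) can be sketched in a few lines. The snippet below is a minimal illustration only, assuming one scalar feature per node, a ReLU pointwise nonlinearity, and one trainable scalar parameter per hop; the function names, the `theta` parameterization, and the max-activation variant's exact neighborhood definition are our own assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def graph_adaptive_activation(S, x, theta, g=lambda z: np.maximum(z, 0.0)):
    """Direct-form graph-adaptive activation (a sketch).

    Nodal features are first nonlinearized pointwise by g (ReLU here,
    an assumption) and then combined over K-hop neighborhoods with
    trainable parameters theta, akin to a graph convolution:
        y = sum_k theta[k] * S^k g(x)
    S: (N, N) graph shift operator; x: (N,) features; theta: (K+1,).
    """
    z = g(x).astype(float)        # pointwise nonlinearization
    y = np.zeros_like(z)
    for k in range(len(theta)):
        y += theta[k] * z         # add the k-hop contribution
        z = S @ z                 # diffuse one more hop over the graph
    return y

def graph_adaptive_max_activation(S, x, theta):
    """Graph-adaptive max activation (a sketch).

    Each node takes the maximum feature over its (up to) k-hop
    neighborhood and mixes the per-hop maxima with trainable theta;
    this is the subclass the abstract states is Lipschitz stable.
    """
    N = len(x)
    A = (S != 0).astype(int)       # neighborhood structure from S
    reach = np.eye(N, dtype=int)   # 0-hop reachability (the node itself)
    y = np.zeros(N)
    for k in range(len(theta)):
        hop_max = np.array([x[reach[i] > 0].max() for i in range(N)])
        y += theta[k] * hop_max
        reach = np.minimum(reach + reach @ A, 1)  # grow to (k+1) hops
    return y

# Example on a 5-node ring graph with hypothetical trained parameters.
S = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
x = np.array([1.0, -2.0, 0.5, 3.0, -0.5])
theta = np.array([1.0, 0.5, 0.25])
print(graph_adaptive_activation(S, x, theta))
print(graph_adaptive_max_activation(S, x, theta))
```

Because each iteration only applies the shift operator S (nodes exchange values with their one-hop neighbors), both sketches are distributable in the sense the abstract describes; the kernel form mentioned there would replace the pointwise g with a kernel transformation before shifting, which is omitted here for brevity.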