Paper Title
Simple and Effective Graph Autoencoders with One-Hop Linear Models
Paper Authors
Abstract
Over the last few years, graph autoencoders (AE) and variational autoencoders (VAE) have emerged as powerful node embedding methods, with promising performance on challenging tasks such as link prediction and node clustering. Graph AE, VAE and most of their extensions rely on multi-layer graph convolutional network (GCN) encoders to learn vector space representations of nodes. In this paper, we show that GCN encoders are actually unnecessarily complex for many applications. We propose to replace them with significantly simpler and more interpretable linear models w.r.t. the direct neighborhood (one-hop) adjacency matrix of the graph, involving fewer operations, fewer parameters and no activation function. For the two aforementioned tasks, we show that this simpler approach consistently reaches performance competitive with GCN-based graph AE and VAE on numerous real-world graphs, including all benchmark datasets commonly used to evaluate graph AE and VAE. Based on these results, we also question the relevance of repeatedly using these datasets to compare complex graph AE and VAE.
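To make the proposed simplification concrete, the following is a minimal NumPy sketch of the idea described in the abstract: a single linear encoder applied to the symmetrically normalized one-hop adjacency matrix (no hidden GCN layers, no activation function), paired with the standard inner-product decoder used in graph AE setups. The function names and the toy graph are our own illustrative choices, not code from the paper.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def linear_encoder(A_norm, X, W):
    """One-hop linear encoder: Z = A_norm X W. A single matrix product
    replaces the stacked, nonlinear GCN layers; W is the only parameter."""
    return A_norm @ X @ W

def inner_product_decoder(Z):
    """Reconstruct edge probabilities as sigmoid(Z Z^T)."""
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# Toy 4-node path graph with identity features and random weights
# (in practice W would be learned by minimizing a reconstruction loss).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                     # featureless setting: X = I
W = rng.normal(size=(4, 2))       # embedding dimension 2

Z = linear_encoder(normalize_adjacency(A), X, W)
A_hat = inner_product_decoder(Z)
print(Z.shape, A_hat.shape)       # (4, 2) (4, 4)
```

Because the encoder is a single linear map of the one-hop adjacency matrix, each embedding coordinate is a direct weighted sum over a node and its immediate neighbors, which is what makes the model easy to interpret compared with a multi-layer GCN.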