Paper Title
SMGRL: Scalable Multi-resolution Graph Representation Learning
Paper Authors
Paper Abstract
Graph convolutional networks (GCNs) allow us to learn topologically-aware node embeddings, which can be useful for classification or link prediction. However, they are unable to capture long-range dependencies between nodes without adding more layers -- which in turn leads to over-smoothing and increased time and space complexity. Further, the complex dependencies between nodes make mini-batching challenging, limiting their applicability to large graphs. We propose a Scalable Multi-resolution Graph Representation Learning (SMGRL) framework that enables us to learn multi-resolution node embeddings efficiently. Our framework is model-agnostic and can be applied to any existing GCN model. We dramatically reduce training costs by training only on a reduced-dimension coarsening of the original graph, then exploit self-similarity to apply the resulting algorithm at multiple resolutions. The resulting multi-resolution embeddings can be aggregated to yield high-quality node embeddings that capture both long- and short-range dependencies. Our experiments show that this leads to improved classification accuracy, without incurring high computational costs.
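The pipeline the abstract describes -- coarsen the graph, apply the same GCN-style operator at each resolution, then aggregate the embeddings -- can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the cluster assignment is assumed given (a real system would compute a coarsening hierarchy), a single fixed-weight propagation stands in for a trained GCN, and the helper names (`normalize_adj`, `coarsen`, `multires_embed`) are hypothetical.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2},
    # the propagation operator used by standard GCN layers.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def coarsen(A, assign):
    # assign: integer array mapping each fine node to a coarse cluster id.
    # P is the (nodes x clusters) assignment matrix; P^T A P is the
    # coarsened adjacency (hypothetical stand-in for a real coarsening scheme).
    n, k = len(assign), assign.max() + 1
    P = np.zeros((n, k))
    P[np.arange(n), assign] = 1.0
    return P, P.T @ A @ P

def multires_embed(A, X, W, assign):
    # Fine-resolution embedding: one GCN-style propagation with weights W
    # (an illustrative stand-in for any trained GCN applied at this level).
    Z_fine = normalize_adj(A) @ X @ W
    # Coarse-resolution embedding: reuse the *same* weights on the coarsened
    # graph (the self-similarity idea), then lift back to the fine nodes via P.
    P, A_c = coarsen(A, assign)
    X_c = (P.T @ X) / P.sum(axis=0)[:, None]  # cluster-mean features
    Z_coarse = P @ (normalize_adj(A_c) @ X_c @ W)
    # Aggregate resolutions by concatenation, so the final embedding carries
    # both short-range (fine) and long-range (coarse) information.
    return np.concatenate([Z_fine, Z_coarse], axis=1)
```

For example, on a 4-node path graph with clusters `[0, 0, 1, 1]`, each node's embedding concatenates its fine-level propagation with the lifted embedding of its 2-node coarse cluster; deeper hierarchies would simply repeat the coarsen-and-lift step.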