Paper Title

Nimble GNN Embedding with Tensor-Train Decomposition

Paper Authors

Chunxing Yin, Da Zheng, Israt Nisa, Christos Faloutsos, George Karypis, Richard Vuduc

Paper Abstract

This paper describes a new method for representing the embedding tables of graph neural networks (GNNs) more compactly via tensor-train (TT) decomposition. We consider the scenario where (a) the graph data lack node features, so that embeddings must be learned during training; and (b) we wish to exploit GPU platforms, where smaller tables are needed to reduce host-to-GPU communication even on large-memory GPUs. The use of TT enables a compact parameterization of the embedding table, rendering it small enough to fit entirely on modern GPUs even for massive graphs. When combined with judicious schemes for initialization and hierarchical graph partitioning, this approach can reduce the size of node embedding vectors by a factor of 1,659 to 81,362 on large publicly available benchmark datasets, achieving comparable or better accuracy and significant speedups on multi-GPU systems. In some cases, our model, which takes no explicit node features as input, can even match the accuracy of models that use node features.
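To illustrate the core idea, below is a minimal PyTorch sketch (not the authors' implementation) of a TT-parameterized embedding table: an N x D table with N = n1*n2*n3 rows and D = d1*d2*d3 columns is stored as three small TT cores, and a row lookup is reconstructed on the fly by contracting per-index core slices. The factor sizes, TT-ranks, and the random initialization are illustrative placeholders; the paper's judicious initialization and hierarchical graph partitioning are omitted here.

```python
import torch
import torch.nn as nn

class TTEmbedding(nn.Module):
    """Embedding table W of shape (n1*n2*n3, d1*d2*d3) stored as three
    TT cores G_k of shape (r_{k-1}, n_k, d_k, r_k), with r_0 = r_3 = 1.
    Storage drops from O(N*D) to O(sum_k r_{k-1} * n_k * d_k * r_k)."""

    def __init__(self, n_factors=(200, 220, 250), d_factors=(4, 5, 5), rank=16):
        super().__init__()
        self.n_factors = n_factors
        ranks = (1, rank, rank, 1)  # TT-ranks r0..r3
        # Placeholder random init; the paper uses a more careful scheme.
        self.cores = nn.ParameterList([
            nn.Parameter(0.1 * torch.randn(ranks[k], n_factors[k],
                                           d_factors[k], ranks[k + 1]))
            for k in range(3)
        ])

    def forward(self, idx):
        # idx: LongTensor of shape (B,) with row indices in [0, n1*n2*n3).
        n1, n2, n3 = self.n_factors
        # Mixed-radix decomposition of each row index into (i1, i2, i3).
        i1, i2, i3 = idx // (n2 * n3), (idx // n3) % n2, idx % n3
        # Per-index core slices A_k, each of shape (B, r_{k-1}, d_k, r_k).
        a1 = self.cores[0][:, i1].permute(1, 0, 2, 3)
        a2 = self.cores[1][:, i2].permute(1, 0, 2, 3)
        a3 = self.cores[2][:, i3].permute(1, 0, 2, 3)
        B = idx.shape[0]
        # Contract along the TT-ranks: row i = A_1 x A_2 x A_3.
        out = a1.reshape(B, -1, a1.shape[-1])                 # (B, d1, r1)
        out = torch.bmm(out, a2.reshape(B, a2.shape[1], -1))  # (B, d1, d2*r2)
        out = out.reshape(B, -1, a3.shape[1])                 # (B, d1*d2, r2)
        out = torch.bmm(out, a3.reshape(B, a3.shape[1], -1))  # (B, d1*d2, d3)
        return out.reshape(B, -1)                             # (B, D)
```

With these illustrative sizes the dense table would hold 200*220*250 rows of dimension 100, about 1.1 billion parameters, while the three cores hold roughly 3.1 x 10^5, a compression of about 3,500x; the abstract's factor-of-1,659-to-81,362 figures are of this kind, at the scale of the benchmark graphs.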
