Paper Title
SUGAR: Efficient Subgraph-level Training via Resource-aware Graph Partitioning
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) have demonstrated great potential in a variety of graph-based applications, such as recommender systems, drug discovery, and object recognition. Nevertheless, resource-efficient GNN learning remains a rarely explored topic despite its many benefits for edge computing and Internet of Things (IoT) applications. To improve this state of affairs, this work proposes efficient subgraph-level training via resource-aware graph partitioning (SUGAR). SUGAR first partitions the initial graph into a set of disjoint subgraphs and then performs local training at the subgraph level. We provide a theoretical analysis and conduct extensive experiments on five graph benchmarks to verify its efficacy in practice. Our results show that SUGAR achieves up to a 33x runtime speedup and a 3.8x memory reduction on large-scale graphs. We believe SUGAR opens a new research direction towards developing GNN methods that are resource-efficient, and hence suitable for IoT deployment.
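
To make the partition-then-train-locally idea concrete, below is a minimal, self-contained sketch; it is not the authors' released code. It assumes a random node assignment in place of the paper's resource-aware partitioner and a one-layer mean-aggregation GCN in place of the actual GNN backbone; the names partition_graph, OneLayerGCN, and train_on_subgraph are illustrative only.

```python
# Minimal sketch of subgraph-level training in the spirit of SUGAR; NOT the
# authors' implementation. A random node assignment stands in for the paper's
# resource-aware partitioner, and a one-layer mean-aggregation GCN stands in
# for the GNN backbone. All names below are illustrative.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


def partition_graph(num_nodes, edge_index, num_parts):
    """Split nodes into disjoint subgraphs and keep only intra-partition edges."""
    assignment = np.random.randint(0, num_parts, size=num_nodes)
    subgraphs = []
    for p in range(num_parts):
        nodes = np.where(assignment == p)[0]
        remap = {int(n): i for i, n in enumerate(nodes)}
        mask = np.isin(edge_index[0], nodes) & np.isin(edge_index[1], nodes)
        kept = edge_index[:, mask]
        local_edges = np.array(
            [[remap[int(s)], remap[int(t)]] for s, t in kept.T], dtype=np.int64
        ).reshape(-1, 2).T
        subgraphs.append((nodes, local_edges))
    return subgraphs


class OneLayerGCN(nn.Module):
    """Tiny GCN: mean-aggregate neighbour features, then classify."""

    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.lin = nn.Linear(in_dim, num_classes)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return self.lin(adj @ x / deg)


def train_on_subgraph(model, x, adj, y, epochs=50, lr=0.01):
    """Local training: only this subgraph's nodes/edges are touched, so peak
    memory is bounded by the largest partition rather than the full graph."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x, adj), y)
        loss.backward()
        opt.step()
    return loss.item()


if __name__ == "__main__":
    # Toy graph: 100 nodes, 400 random edges, 16-d features, 3 classes.
    N, E, num_parts = 100, 400, 4
    edge_index = np.random.randint(0, N, size=(2, E))
    feats, labels = torch.randn(N, 16), torch.randint(0, 3, (N,))

    for nodes, local_edges in partition_graph(N, edge_index, num_parts):
        adj = torch.zeros(len(nodes), len(nodes))
        le = torch.as_tensor(local_edges)
        adj[le[0], le[1]] = 1.0
        model = OneLayerGCN(in_dim=16, num_classes=3)
        loss = train_on_subgraph(model, feats[nodes], adj, labels[nodes])
        print(f"partition with {len(nodes)} nodes: final loss {loss:.3f}")
```

Because each optimization step only materializes one partition's features and adjacency, peak memory in this sketch scales with the largest subgraph rather than the full graph, which is roughly the intuition behind the memory reduction reported in the abstract.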