Paper Title


AGL: a Scalable System for Industrial-purpose Graph Machine Learning

Paper Authors

Dalong Zhang, Xin Huang, Ziqi Liu, Zhiyang Hu, Xianzheng Song, Zhibang Ge, Zhiqiang Zhang, Lin Wang, Jun Zhou, Yang Shuang, Yuan Qi

Paper Abstract


Machine learning over graphs has emerged as a powerful tool for learning from graph data. However, it is challenging for industrial communities to leverage such techniques, e.g., graph neural networks (GNNs), to solve real-world problems at scale, because of the inherent data dependency in graphs. As a result, a GNN cannot simply be trained with classic learning systems such as parameter servers, which assume data parallelism. Existing systems store the graph data in memory for fast access, either on a single machine or in remote graph stores. Their major drawbacks are three-fold. First, they cannot scale, due to limits on memory capacity or on the bandwidth between graph stores and workers. Second, they require the extra development of graph stores, rather than exploiting mature infrastructures such as MapReduce that guarantee good system properties. Third, they focus on training but ignore optimizing inference over graphs, which makes them unintegrated systems. In this paper, we design AGL, a scalable, fault-tolerant, and integrated system providing fully functional training and inference for GNNs. Our system design follows the message-passing scheme underlying GNN computations. We generate the $k$-hop neighborhood, an information-complete subgraph for each node, and perform inference simply by merging values from in-edge neighbors and propagating values to out-edge neighbors via MapReduce. Moreover, because the $k$-hop neighborhood is an information-complete subgraph for each node, the resulting data independence lets us train directly on parameter servers. AGL, implemented on mature infrastructures, can finish training a 2-layer graph attention network on a graph with billions of nodes and hundreds of billions of edges in 14 hours, and complete inference in 1.2 hours.
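The $k$-hop neighborhood construction the abstract describes maps naturally onto repeated map/reduce rounds. Below is a minimal toy sketch in Python of that idea, not AGL's actual implementation: the function names and the in-memory dictionary standing in for a MapReduce shuffle are illustrative assumptions. Each round, the map phase propagates every node's current record along its out-edges, and the reduce phase merges the records a node receives from its in-edge neighbors; after $k$ rounds, each node's record covers its $k$-hop in-edge neighborhood.

```python
from collections import defaultdict

# Toy directed graph as an edge list of (src, dst) pairs.
EDGES = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")]

def map_phase(state, edges):
    """Map: each node emits its current neighborhood record along
    its out-edges, keyed by the destination node (the 'shuffle' key)."""
    emitted = defaultdict(list)
    for src, dst in edges:
        emitted[dst].append((src, state[src]))
    return emitted

def reduce_phase(state, emitted):
    """Reduce: each node merges the records received from its
    in-edge neighbors into its own neighborhood record."""
    new_state = {}
    for node, record in state.items():
        merged = set(record)
        for src, src_record in emitted.get(node, []):
            merged.add(src)        # the in-edge neighbor itself
            merged |= src_record   # plus everything it has seen so far
        new_state[node] = frozenset(merged)
    return new_state

def k_hop_neighborhoods(edges, k):
    """Run k map/reduce rounds; after round i, each node's record
    contains all nodes reachable within i hops along in-edges."""
    nodes = {n for edge in edges for n in edge}
    state = {n: frozenset() for n in nodes}
    for _ in range(k):
        state = reduce_phase(state, map_phase(state, edges))
    return state

if __name__ == "__main__":
    # With k=2: node "d" collects "c" (1 hop) plus "a" and "b" (2 hops).
    for node, hood in sorted(k_hop_neighborhoods(EDGES, k=2).items()):
        print(node, "<-", sorted(hood))
```

On a real cluster, the per-round state would live in a distributed file system and the emitted records would be shuffled by destination key, which is what lets this scheme inherit the scalability and fault tolerance of infrastructure like MapReduce, as the abstract claims.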
