Paper Title
Benchmarking Graph Neural Networks
Paper Authors
Paper Abstract
In the last few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs. This emerging field has witnessed an extensive growth of promising techniques that have been applied with success to computer science, mathematics, biology, physics and chemistry. But for any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress. This led us in March 2020 to release a benchmark framework that i) comprises a diverse collection of mathematical and real-world graphs, ii) enables fair model comparison under the same parameter budget to identify key architectures, iii) has an open-source, easy-to-use and reproducible code infrastructure, and iv) is flexible for researchers to experiment with new theoretical ideas. As of December 2022, the GitHub repository has reached 2,000 stars and 380 forks, which demonstrates the utility of the proposed open-source framework through its wide usage by the GNN community. In this paper, we present an updated version of our benchmark with a concise presentation of the aforementioned framework characteristics, introduce an additional medium-sized molecular dataset, AQSOL, similar to the popular ZINC but with a real-world measured chemical target, and discuss how this framework can be leveraged to explore new GNN designs and insights. As a proof of value of our benchmark, we study the case of graph positional encoding (PE) in GNNs, which was introduced with this benchmark and has since spurred interest in exploring more powerful PE for Transformers and GNNs in a robust experimental setting.
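To make the graph positional encoding (PE) idea mentioned in the abstract concrete, the sketch below computes Laplacian eigenvector PE for a graph: the k non-trivial eigenvectors of the normalized graph Laplacian are attached to each node as extra positional features. This is a minimal illustrative sketch only, assuming NumPy and NetworkX; the function name `laplacian_pe`, the choice of k, and the dense eigendecomposition are assumptions for illustration and are not the benchmark repository's actual API or implementation.

```python
# Minimal sketch of Laplacian eigenvector positional encoding (LapPE).
# Illustrative only; the benchmark's own code (PyTorch/DGL) differs in details.
import numpy as np
import networkx as nx

def laplacian_pe(G: nx.Graph, k: int = 8) -> np.ndarray:
    """Return the k non-trivial Laplacian eigenvectors as node PE (n x k)."""
    n = G.number_of_nodes()
    A = nx.to_numpy_array(G)                       # dense adjacency matrix
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt    # symmetric normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    pe = eigvecs[:, 1:k + 1]                       # skip the trivial constant eigenvector
    if pe.shape[1] < k:                            # pad if the graph has fewer than k+1 nodes
        pe = np.pad(pe, ((0, 0), (0, k - pe.shape[1])))
    # Eigenvector signs are arbitrary; random sign flipping during training is a common remedy.
    return pe

# Usage: compute an 8-dimensional PE for each node of a small random graph.
G = nx.erdos_renyi_graph(20, 0.3, seed=0)
pe = laplacian_pe(G, k=8)
print(pe.shape)  # (20, 8)
```

In practice these PE vectors are concatenated with (or added to) the initial node features before the first GNN or Transformer layer, giving the model access to global structural position that plain message passing cannot distinguish.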