Paper title
Graph Neural Networks are Dynamic Programmers
Paper authors
Paper abstract
Recent advances in neural algorithmic reasoning with graph neural networks (GNNs) are propped up by the notion of algorithmic alignment. Broadly, a neural network will be better at learning to execute a reasoning task (in terms of sample complexity) if its individual components align well with the target algorithm. Specifically, GNNs are claimed to align with dynamic programming (DP), a general problem-solving strategy which expresses many polynomial-time algorithms. However, has this alignment truly been demonstrated and theoretically quantified? Here we show, using methods from category theory and abstract algebra, that there exists an intricate connection between GNNs and DP, going well beyond the initial observations over individual algorithms such as Bellman-Ford. Exposing this connection, we easily verify several prior findings in the literature, produce better-grounded GNN architectures for edge-centric tasks, and demonstrate empirical results on the CLRS algorithmic reasoning benchmark. We hope our exposition will serve as a foundation for building stronger algorithmically aligned GNNs.
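The abstract's reference point is the classic observation that Bellman-Ford aligns with GNN message passing: one relaxation round is a step in which each node aggregates messages from its neighbours with a min. A minimal sketch of that correspondence (illustrative only; the function name and edge-list representation are our own, not the paper's):

```python
# Illustrative sketch: one Bellman-Ford relaxation round viewed as a
# GNN-style message-passing step. Each node v receives a "message"
# d[u] + w along every incoming edge (u, v, w), aggregates with min,
# and updates its state by keeping the better of old state and best message.

def bellman_ford_mp(num_nodes, edges, source):
    """edges: list of directed (u, v, w) triples with edge weight w."""
    INF = float("inf")
    d = [INF] * num_nodes
    d[source] = 0.0
    for _ in range(num_nodes - 1):          # at most |V| - 1 rounds
        msgs = [INF] * num_nodes
        for u, v, w in edges:               # message along each edge
            if d[u] + w < msgs[v]:
                msgs[v] = d[u] + w
        # update step: min-aggregation of old state and incoming messages
        d = [min(d[i], msgs[i]) for i in range(num_nodes)]
    return d
```

The min-aggregation here is the DP recurrence d(v) = min_u (d(u) + w(u, v)); replacing it with a learned, permutation-invariant aggregator is what "GNNs align with dynamic programming" refers to informally.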