Paper title
Belief Propagation Neural Networks
Paper authors
Paper abstract
Learned neural solvers have successfully been used to solve combinatorial optimization and decision problems. The more general counting variants of these problems, however, are still largely solved with hand-crafted solvers. To bridge this gap, we introduce belief propagation neural networks (BPNNs), a class of parameterized operators that operate on factor graphs and generalize Belief Propagation (BP). In its strictest form, a BPNN layer (BPNN-D) is a learned iterative operator that provably maintains many of the desirable properties of BP for any choice of parameters. Empirically, we show that after training, BPNN-D performs the task better than the original BP: it converges 1.7x faster on Ising models while providing tighter bounds. On challenging model counting problems, BPNNs compute estimates hundreds of times faster than state-of-the-art hand-crafted methods, while returning estimates of comparable quality.
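To make the object being generalized concrete, the following is a minimal sketch of the classical sum-product belief propagation that BPNN builds on, run on a tiny tree-structured factor graph (two binary variables with unary potentials and one pairwise factor). All variable names and potentials here are invented for illustration and are not from the paper; a BPNN-D layer would replace the hand-coded message update with a learned operator while preserving BP's fixed points.

```python
import numpy as np

# Illustrative factor graph: binary variables x0, x1 with unary
# potentials psi0, psi1 and one pairwise factor phi(x0, x1).
# Values are arbitrary, chosen only for the example.
psi0 = np.array([1.0, 1.0])   # unary potential for x0
psi1 = np.array([2.0, 1.0])   # unary potential for x1
phi = np.array([[1.0, 0.5],
                [0.5, 2.0]])  # pairwise factor phi(x0, x1)

# Sum-product messages. x1 is a leaf, so its variable-to-factor
# message is just its unary potential.
m1_to_f = psi1
# Factor-to-variable message to x0: sum out x1.
mf_to_0 = phi @ m1_to_f
# Unnormalized belief at x0 and the implied partition function.
b0 = psi0 * mf_to_0
Z_bp = b0.sum()

# Brute-force partition function for comparison; BP is exact on
# tree-structured graphs, so the two values agree.
Z_exact = sum(psi0[a] * psi1[b] * phi[a, b]
              for a in (0, 1) for b in (0, 1))
print(Z_bp, Z_exact)
```

On loopy graphs (such as the Ising models and model counting instances in the abstract), these messages are instead iterated to a fixed point and the resulting estimate of Z is only approximate, which is the regime where a learned update can improve convergence speed and bound tightness.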