Paper Title


Understanding Boolean Function Learnability on Deep Neural Networks: PAC Learning Meets Neurosymbolic Models

Authors

Marcio Nicolau, Anderson R. Tavares, Zhiwei Zhang, Pedro Avelar, João M. Flach, Luis C. Lamb, Moshe Y. Vardi

Abstract


Computational learning theory states that many classes of Boolean formulas are learnable in polynomial time. This paper addresses the understudied subject of how, in practice, such formulas can be learned by deep neural networks. Specifically, we analyze Boolean formulas associated with model-sampling benchmarks, combinatorial optimization problems, and random 3-CNFs with varying degrees of constrainedness. Our experiments indicate that: (i) neural learning generalizes better than pure rule-based systems and pure symbolic approaches; (ii) relatively small and shallow neural networks are very good approximators of formulas associated with combinatorial optimization problems; (iii) smaller formulas seem harder to learn, possibly due to the fewer positive (satisfying) examples available; and (iv) interestingly, underconstrained 3-CNF formulas are more challenging to learn than overconstrained ones. Such findings pave the way for a better understanding, construction, and use of interpretable neurosymbolic AI methods.
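The random-3-CNF setup the abstract describes can be illustrated with a minimal sketch: generate a random 3-CNF formula at a chosen clause-to-variable ratio, then label random truth assignments as satisfying or falsifying to form a training set. This is an assumption-laden illustration, not the paper's actual pipeline; the function names are hypothetical, and only the well-known fact that ratios below roughly 4.26 give underconstrained 3-CNFs (and above, overconstrained) is taken as given.

```python
import random

def random_3cnf(num_vars, num_clauses, rng):
    # Each clause picks 3 distinct variables, each possibly negated.
    # Literals use DIMACS-style signed 1-based integers.
    return [
        [(v + 1) * rng.choice([-1, 1])
         for v in rng.sample(range(num_vars), 3)]
        for _ in range(num_clauses)
    ]

def satisfies(assignment, formula):
    # assignment: sequence of bools indexed by 0-based variable.
    # A CNF is satisfied iff every clause has at least one true literal.
    return all(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in formula
    )

def labeled_samples(formula, num_vars, n, rng):
    # Uniform random assignments labeled 1 (satisfying) / 0 (falsifying),
    # the kind of supervised data a neural classifier would train on.
    samples = []
    for _ in range(n):
        x = tuple(rng.random() < 0.5 for _ in range(num_vars))
        samples.append((x, int(satisfies(x, formula))))
    return samples

rng = random.Random(0)
# 85 clauses over 20 variables gives ratio 4.25, near the 3-SAT
# phase transition; lowering the ratio yields underconstrained
# formulas, raising it yields overconstrained ones.
f = random_3cnf(20, 85, rng)
data = labeled_samples(f, 20, 1000, rng)
print(sum(y for _, y in data), "satisfying assignments out of", len(data))
```

Note how the constrainedness knob is just the clause count: underconstrained formulas have many satisfying assignments (heavily imbalanced toward positives), which connects to the abstract's observation that the number of positive examples affects learnability.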
