Paper Title

Learning Reasoning Strategies in End-to-End Differentiable Proving

Paper Authors

Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, Tim Rocktäschel

Paper Abstract

Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs). These neuro-symbolic models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted by their computational complexity, as they need to consider all possible proof paths for explaining a goal, thus rendering them unfit for large-scale applications. We present Conditional Theorem Provers (CTPs), an extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation. We show that CTPs are scalable and yield state-of-the-art results on the CLUTRR dataset, which tests systematic generalisation of neural models by learning to reason over smaller graphs and evaluating on larger ones. Finally, CTPs show better link prediction results on standard benchmarks in comparison with other neural-symbolic models, while being explainable. All source code and datasets are available online, at https://github.com/uclnlp/ctp.
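To make the abstract's central claim concrete, the sketch below illustrates what "learning a rule selection strategy via gradient-based optimisation" can look like: rather than unifying a goal against every rule in the knowledge base, a learned module generates rule representations conditioned on the goal. This is a minimal, hypothetical sketch assuming PyTorch; the class name `GoalConditionedRuleSelector`, its parameters, and the linear-map parameterisation are illustrative assumptions, not the authors' implementation (the actual code is available at the repository linked above).

```python
import torch
import torch.nn as nn

class GoalConditionedRuleSelector(nn.Module):
    """Hypothetical sketch of goal-conditioned rule selection.

    Instead of enumerating all rules (as in NTPs, which must consider
    every proof path), a learned mapping produces the embeddings of a
    rule's body predicates directly from the goal's predicate embedding,
    so only goal-relevant rules are generated and explored.
    """

    def __init__(self, embedding_dim: int, body_size: int = 2):
        super().__init__()
        # One linear map per body atom: goal embedding -> body predicate embedding.
        self.body_maps = nn.ModuleList(
            nn.Linear(embedding_dim, embedding_dim) for _ in range(body_size)
        )

    def forward(self, goal_predicate: torch.Tensor) -> list[torch.Tensor]:
        # goal_predicate: [batch, embedding_dim] embedding of the goal's predicate.
        # Returns one embedding per body atom of the generated rule. The whole
        # mapping is differentiable, so the selection strategy is trained
        # end-to-end by back-propagation along with the rest of the prover.
        return [m(goal_predicate) for m in self.body_maps]

# Usage: generate a two-atom rule body for a batch of goal predicates,
# e.g. mapping a "grandpaOf" goal towards "fatherOf"/"parentOf"-like bodies.
selector = GoalConditionedRuleSelector(embedding_dim=100)
goal = torch.randn(8, 100)
body = selector(goal)  # list of two [8, 100] body-atom embeddings
```

Because rule generation is conditioned on the goal, the prover avoids the exhaustive enumeration of proof paths that limits NTPs' scalability, which is what the abstract credits for the results on CLUTRR and the link prediction benchmarks.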
