Paper Title

Reduced Implication-bias Logic Loss for Neuro-Symbolic Learning

Paper Authors

Haoyuan He, Wangzhou Dai, Ming Li

Paper Abstract

Integrating logical reasoning and machine learning by approximating logical inference with differentiable operators is a widely used technique in Neuro-Symbolic systems. However, some differentiable operators can introduce a significant bias during backpropagation and degrade the performance of Neuro-Symbolic learning. In this paper, we reveal that this bias, named \textit{Implication Bias}, is common in loss functions derived from fuzzy logic operators. Furthermore, we propose a simple yet effective method to transform the biased loss functions into \textit{Reduced Implication-bias Logic Loss (RILL)} to address the above problem. Empirical studies show that RILL achieves significant improvements compared with the biased logic loss functions, especially when the knowledge base is incomplete, and remains more robust than the compared methods when labelled data is insufficient.
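
To make the \textit{implication bias} described above concrete, the following is a minimal sketch, not the paper's RILL implementation. It assumes (as illustrative choices, not details taken from the paper) the Reichenbach fuzzy implication I(a, b) = 1 - a + a·b and the induced logic loss 1 - I(a, b), and shows that gradient descent can lower the loss by pushing the antecedent towards 0 (making the rule vacuously true) rather than pushing the consequent towards 1, which is usually the intended learning signal.

```python
# Minimal sketch of implication bias with the Reichenbach implication
# I(a, b) = 1 - a + a*b and the loss L = 1 - I(a, b).
# Illustrative only; this is not the paper's RILL construction.
import torch

# Fuzzy truth degrees of antecedent and consequent, e.g. neural-net outputs.
a = torch.tensor(0.6, requires_grad=True)  # antecedent truth degree
b = torch.tensor(0.3, requires_grad=True)  # consequent truth degree

implication = 1 - a + a * b   # Reichenbach fuzzy implication
loss = 1 - implication        # induced logic loss
loss.backward()

# dL/da = 1 - b > 0  -> descent decreases a (vacuous satisfaction, the bias)
# dL/db = -a    < 0  -> descent increases b (the desired behaviour)
print(f"grad wrt antecedent a: {a.grad.item():+.2f}")  # +0.70
print(f"grad wrt consequent b: {b.grad.item():+.2f}")  # -0.60
```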
