Paper Title

Deep Adaptive Semantic Logic (DASL): Compiling Declarative Knowledge into Deep Neural Networks

Authors

Sikka, Karan, Silberfarb, Andrew, Byrnes, John, Sur, Indranil, Chow, Ed, Divakaran, Ajay, Rohwer, Richard

Abstract

We introduce Deep Adaptive Semantic Logic (DASL), a novel framework for automating the generation of deep neural networks that incorporates user-provided formal knowledge to improve learning from data. We provide formal semantics that demonstrate that our knowledge representation captures all of first order logic and that finite sampling from infinite domains converges to correct truth values. DASL's representation improves on prior neural-symbolic work by avoiding vanishing gradients, allowing deeper logical structure, and enabling richer interactions between the knowledge and learning components. We illustrate DASL through a toy problem in which we add structure to an image classification problem and demonstrate that knowledge of that structure reduces data requirements by a factor of $1000$. We then evaluate DASL on a visual relationship detection task and demonstrate that the addition of commonsense knowledge improves performance by $10.7\%$ in a data scarce setting.
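The core idea of compiling logical knowledge into a differentiable network, as described in the abstract, can be sketched generically. The sketch below is an illustrative assumption, not DASL's actual semantics: it uses a product t-norm for conjunction, mean-pooling over a finite sample to approximate universal quantification, and small sigmoid networks as learned predicates. Names like `soft_and` and the predicate networks `P` and `Q` are hypothetical.

```python
import torch

# Hypothetical "soft" first-order logic connectives mapping truth
# values in [0, 1] to [0, 1]; fully differentiable, so gradients can
# flow from a logical formula back into the predicate networks.
def soft_and(a, b):
    return a * b                 # product t-norm (illustrative choice)

def soft_or(a, b):
    return a + b - a * b         # dual of the product t-norm

def soft_not(a):
    return 1.0 - a

def soft_forall(truth_values):
    # Approximate quantification over an infinite domain by averaging
    # over a finite sample, as the abstract's convergence claim suggests.
    return truth_values.mean()

# Predicates as small neural networks producing truth values in [0, 1].
P = torch.nn.Sequential(torch.nn.Linear(4, 1), torch.nn.Sigmoid())
Q = torch.nn.Sequential(torch.nn.Linear(4, 1), torch.nn.Sigmoid())

x = torch.randn(32, 4)           # finite sample of domain elements
# Truth of the rule  ∀x. P(x) → Q(x),  rewritten as  ¬P(x) ∨ Q(x).
truth = soft_forall(soft_or(soft_not(P(x)), Q(x)))
loss = 1.0 - truth               # train to make the declared rule true
loss.backward()                  # gradients reach both P and Q
```

In a setup like this, the rule's truth value becomes an extra training objective alongside the data loss, which is one plausible reading of how declared knowledge could reduce data requirements.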
