Paper Title

Modeling Content and Context with Deep Relational Learning

Paper Authors

Maria Leonor Pacheco, Dan Goldwasser

Abstract

Building models for realistic natural language tasks requires dealing with long texts and accounting for complicated structural dependencies. Neural-symbolic representations have emerged as a way to combine the reasoning capabilities of symbolic methods with the expressiveness of neural networks. However, most existing frameworks for combining neural and symbolic representations have been designed for classic relational learning tasks that operate over a universe of symbolic entities and relations. In this paper, we present DRaiL, an open-source declarative framework for specifying deep relational models, designed to support a variety of NLP scenarios. Our framework supports easy integration with expressive language encoders, and provides an interface to study the interactions between representation, inference, and learning.
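To illustrate the neural-symbolic idea the abstract describes, here is a minimal, purely hypothetical sketch: a declarative rule is grounded over a universe of entities and candidate labels, each grounding is scored by a (here: toy, hard-coded) stand-in for a neural encoder, and inference picks the highest-scoring assignment. All names and the toy scores are illustrative assumptions, not DRaiL's actual API or syntax.

```python
# Toy "universe": entities and candidate labels for a rule like
#   HasText(e) => HasStance(e, l)
# In a real deep relational model, each grounding (e, l) would be
# scored by a learned neural encoder; here a lookup table stands in.
entities = ["doc1", "doc2"]
labels = ["pro", "con"]

def neural_score(entity, label):
    """Stand-in for a neural scorer over rule groundings (toy values)."""
    table = {
        ("doc1", "pro"): 0.9, ("doc1", "con"): 0.1,
        ("doc2", "pro"): 0.3, ("doc2", "con"): 0.7,
    }
    return table[(entity, label)]

def map_inference():
    """MAP inference: pick the best label per entity.

    This toy example has no cross-entity constraints, so inference
    decomposes into independent argmax decisions; structured models
    would instead solve a joint (e.g. ILP-based) inference problem.
    """
    return {e: max(labels, key=lambda l: neural_score(e, l))
            for e in entities}

assignment = map_inference()
print(assignment)  # {'doc1': 'pro', 'doc2': 'con'}
```

The sketch only conveys the separation the paper's framing relies on: declarative rules define the structure of the decision space, while neural components supply the scores that inference combines.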
