Paper Title

Exploring Neural Models for Parsing Natural Language into First-Order Logic

Paper Authors

Hrituraj Singh, Milan Aggrawal, Balaji Krishnamurthy

Paper Abstract

Semantic parsing is the task of obtaining machine-interpretable representations from natural language text. We consider one such formal representation, First-Order Logic (FOL), and explore the capability of neural models in parsing English sentences into FOL. We model FOL parsing as a sequence-to-sequence mapping task in which a natural language sentence is encoded into an intermediate representation by an LSTM, followed by a decoder that sequentially generates the predicates of the corresponding FOL formula. We improve the standard encoder-decoder model by introducing a variable alignment mechanism that enables it to align variables across predicates in the predicted FOL. We further show the effectiveness of predicting the category of each FOL entity (Unary, Binary, Variable, or Scoped Entity) at each decoder step as an auxiliary task for improving the consistency of the generated FOL. We perform rigorous evaluations and extensive ablations. We also aim to release our code, a large-scale FOL dataset, and trained models to aid further research in logic-based parsing and inference in NLP.
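
To make the architecture described in the abstract concrete, below is a minimal PyTorch-style sketch of such an encoder-decoder: an LSTM encoder over the English sentence, an LSTM decoder that emits FOL tokens, and an auxiliary head that predicts the entity category (Unary, Binary, Variable, Scoped) at each decoder step. All names, vocabulary sizes, and dimensions are illustrative assumptions, and the variable-alignment mechanism is omitted; this is not the authors' released implementation.

```python
# Minimal sketch (assumptions, not the paper's code): LSTM encoder-decoder
# for FOL parsing with an auxiliary entity-category prediction head.
import torch
import torch.nn as nn

class Seq2SeqFOLParser(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, n_categories=4,
                 emb_dim=128, hid_dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.token_head = nn.Linear(hid_dim, tgt_vocab)        # next FOL token
        self.category_head = nn.Linear(hid_dim, n_categories)  # auxiliary task

    def forward(self, src_ids, tgt_ids):
        # Encode the English sentence into a final hidden/cell state.
        _, (h, c) = self.encoder(self.src_emb(src_ids))
        # Decode with teacher forcing; at each step, score FOL tokens and,
        # as the auxiliary prediction, the FOL entity category.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), (h, c))
        return self.token_head(dec_out), self.category_head(dec_out)

# Toy usage: a batch of 2 sentences of length 5, gold FOL prefixes of length 7.
model = Seq2SeqFOLParser(src_vocab=1000, tgt_vocab=500)
src = torch.randint(0, 1000, (2, 5))
tgt = torch.randint(0, 500, (2, 7))
token_logits, category_logits = model(src, tgt)
print(token_logits.shape, category_logits.shape)  # (2, 7, 500) (2, 7, 4)
```

In a full training setup, the token head would typically be trained with cross-entropy against the gold FOL sequence and the category head with a second cross-entropy loss added as an auxiliary objective, matching the abstract's description of category prediction as an auxiliary task.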
