Paper Title
A Neural-Symbolic Approach to Natural Language Understanding
Paper Authors
Paper Abstract
Deep neural networks, empowered by pre-trained language models, have achieved remarkable results in natural language understanding (NLU) tasks. However, their performance can deteriorate drastically when logical reasoning is needed. This is because NLU in principle depends not only on analogical reasoning, which deep neural networks are good at, but also on logical reasoning. According to the dual-process theory, analogical reasoning and logical reasoning are respectively carried out by System 1 and System 2 in the human brain. Inspired by the theory, we present a novel framework for NLU called Neural-Symbolic Processor (NSP), which performs analogical reasoning based on neural processing and logical reasoning based on both neural and symbolic processing. As a case study, we conduct experiments on two NLU tasks, question answering (QA) and natural language inference (NLI), in settings where numerical reasoning (a type of logical reasoning) is necessary. The experimental results show that our method significantly outperforms state-of-the-art methods in both tasks.
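To make the dual-path idea in the abstract concrete, below is a minimal, hypothetical Python sketch. It is not the authors' implementation: the "neural" components are stubbed with simple heuristics, and all names (system1_answer, system2_answer, nsp_style_predict) are invented for illustration. It only shows how an analogical (System 1) prediction and a program-based logical (System 2) prediction could each be produced and then combined for a numerical QA example.

```python
# Hypothetical sketch of a neural-symbolic dual-path pipeline.
# The "neural" parts are replaced by toy heuristics purely for illustration;
# in a real system they would be pre-trained language models.

import re
from dataclasses import dataclass


@dataclass
class Prediction:
    answer: str
    confidence: float
    source: str  # "system1" (analogical) or "system2" (logical)


def system1_answer(question: str, passage: str) -> Prediction:
    """Stand-in for a neural reader: pick the passage sentence with most overlap."""
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    q_tokens = set(question.lower().split())
    best = max(sentences, key=lambda s: len(q_tokens & set(s.lower().split())))
    return Prediction(answer=best, confidence=0.4, source="system1")


def system2_answer(question: str, passage: str) -> Prediction:
    """Stand-in for neural-to-symbolic translation followed by symbolic execution."""
    # Hypothetical "program generation": if the question asks for a difference,
    # emit a subtraction over the numbers found in the passage, then execute it.
    numbers = [float(x) for x in re.findall(r"\d+(?:\.\d+)?", passage)]
    if "how many more" in question.lower() and len(numbers) >= 2:
        program = f"{max(numbers[:2])} - {min(numbers[:2])}"  # symbolic form
        result = eval(program)                                  # symbolic execution
        return Prediction(answer=str(result), confidence=0.9, source="system2")
    return Prediction(answer="", confidence=0.0, source="system2")


def nsp_style_predict(question: str, passage: str) -> Prediction:
    """Combine the two paths by keeping the higher-confidence prediction."""
    candidates = [system1_answer(question, passage), system2_answer(question, passage)]
    return max(candidates, key=lambda p: p.confidence)


if __name__ == "__main__":
    passage = "The home team scored 31 points. The visitors scored 17 points."
    question = "How many more points did the home team score than the visitors?"
    print(nsp_style_predict(question, passage))
    # Prediction(answer='14.0', confidence=0.9, source='system2')
```

In this toy setup, the purely neural path would only be able to point at a sentence, while the symbolic path executes an explicit arithmetic program; which path wins is decided by a simple confidence comparison, one of several plausible ways such predictions could be combined.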