Paper Title

Building Knowledge-Grounded Dialogue Systems with Graph-Based Semantic Modeling

Authors

Yizhe Yang, Heyan Huang, Yang Gao, and Jiawei Li

Abstract

The knowledge-grounded dialogue task aims to generate responses that convey information from given knowledge documents. However, without the aid of an explicit semantic structure, it is challenging for current sequence-based models to acquire knowledge from complex documents and integrate it to produce correct responses. To address these issues, we propose a novel graph structure, Grounded Graph ($G^2$), that models the semantic structure of both dialogue and knowledge to facilitate knowledge selection and integration for knowledge-grounded dialogue generation. We also propose a Grounded Graph Aware Transformer ($G^2AT$) model that fuses multi-form knowledge (both sequential and graph-structured) to enhance knowledge-grounded response generation. Our experimental results show that our proposed model outperforms previous state-of-the-art methods, with gains of more than 10% in response generation and nearly 20% in factual consistency. Furthermore, our model demonstrates good generalization ability and robustness. By incorporating semantic structures as prior knowledge in deep neural networks, our model provides an effective way to aid language generation.
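
To make the fusion idea concrete, below is a minimal PyTorch sketch of how sequential (token-level) knowledge and a graph encoding could be combined via cross-attention inside a transformer-style model. This is an illustrative assumption, not the authors' $G^2AT$ implementation; all module names, dimensions, and the specific fusion strategy (tokens attending to graph nodes) are made up for the example.

```python
# Hypothetical sketch of fusing sequential and graph-structured knowledge.
# Not the paper's G^2AT model; shapes and fusion choices are assumptions.
import torch
import torch.nn as nn


class GraphEncoder(nn.Module):
    """Toy graph encoder: propagates node features over a row-normalised adjacency matrix."""

    def __init__(self, dim: int, num_layers: int = 2):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (batch, nodes, dim); adj: (batch, nodes, nodes)
        h = node_feats
        for layer in self.layers:
            h = torch.relu(layer(adj @ h) + h)  # neighbour aggregation + residual
        return h


class GroundedFusionModel(nn.Module):
    """Fuses token-level states with graph node states via cross-attention."""

    def __init__(self, vocab_size: int, dim: int = 256, heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        seq_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.seq_encoder = nn.TransformerEncoder(seq_layer, num_layers=2)
        self.graph_encoder = GraphEncoder(dim)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, tokens, node_feats, adj):
        seq = self.seq_encoder(self.embed(tokens))      # sequential knowledge encoding
        graph = self.graph_encoder(node_feats, adj)     # graph-structured knowledge encoding
        fused, _ = self.cross_attn(seq, graph, graph)   # each token attends to graph nodes
        return self.out(fused + seq)                    # per-token output logits


if __name__ == "__main__":
    model = GroundedFusionModel(vocab_size=1000)
    tokens = torch.randint(0, 1000, (2, 16))            # dummy dialogue/knowledge tokens
    nodes = torch.randn(2, 8, 256)                      # dummy graph node features
    adj = torch.softmax(torch.randn(2, 8, 8), dim=-1)   # stand-in for a normalised adjacency
    print(model(tokens, nodes, adj).shape)              # torch.Size([2, 16, 1000])
```

In this sketch the graph branch acts as an auxiliary memory: the cross-attention step lets each token consult the node representations before prediction, which is one plausible way to inject a semantic-graph prior alongside the sequence encoder.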
