Paper Title

An Interpretable Neuro-Symbolic Reasoning Framework for Task-Oriented Dialogue Generation

Authors

Shiquan Yang, Rui Zhang, Sarah Erfani, Jey Han Lau

Abstract

We study the interpretability issue of task-oriented dialogue systems in this paper. Previously, most neural-based task-oriented dialogue systems employ an implicit reasoning strategy that makes the model predictions uninterpretable to humans. To obtain a transparent reasoning process, we introduce a neuro-symbolic approach to perform explicit reasoning that justifies model decisions by reasoning chains. Since deriving reasoning chains requires multi-hop reasoning for task-oriented dialogues, existing neuro-symbolic approaches would induce error propagation due to the one-phase design. To overcome this, we propose a two-phase approach that consists of a hypothesis generator and a reasoner. We first obtain multiple hypotheses, i.e., potential operations to perform the desired task, through the hypothesis generator. Each hypothesis is then verified by the reasoner, and the valid one is selected to conduct the final prediction. The whole system is trained by exploiting raw textual dialogues without using any reasoning chain annotations. Experimental studies on two public benchmark datasets demonstrate that the proposed approach not only achieves better results, but also introduces an interpretable decision process.
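
The abstract describes a two-phase generate-then-verify pipeline: a hypothesis generator proposes candidate operations, and a reasoner verifies them before the final prediction is made. The following is a minimal illustrative sketch of that control flow only; all names (`Hypothesis`, `HypothesisGenerator`, `Reasoner`, `predict`) are hypothetical placeholders and do not reflect the authors' actual models or code.

```python
# Minimal sketch of a two-phase "generate then verify" pipeline, as outlined in
# the abstract. The generator and reasoner here are stubs, not the paper's models.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Hypothesis:
    """A candidate operation (e.g., a knowledge-base query) for the current turn."""
    operation: str
    score: float  # generator's confidence in this hypothesis


class HypothesisGenerator:
    def propose(self, dialogue_context: str, k: int = 5) -> List[Hypothesis]:
        """Phase 1: propose top-k candidate operations.
        In the paper this would be a neural model; here we return placeholders."""
        return [Hypothesis(operation=f"candidate_op_{i}", score=1.0 / (i + 1))
                for i in range(k)]


class Reasoner:
    def verify(self, hypothesis: Hypothesis, dialogue_context: str) -> bool:
        """Phase 2: check whether the hypothesis yields a consistent reasoning
        chain. Stubbed out as a simple threshold check for illustration."""
        return hypothesis.score > 0.3


def predict(dialogue_context: str,
            generator: HypothesisGenerator,
            reasoner: Reasoner) -> Optional[Hypothesis]:
    """Generate candidates, keep the first one the reasoner verifies, and use
    it for the final prediction; return None if none survive verification."""
    for hypothesis in generator.propose(dialogue_context):
        if reasoner.verify(hypothesis, dialogue_context):
            return hypothesis
    return None


if __name__ == "__main__":
    chosen = predict("user: book a table for two tonight",
                     HypothesisGenerator(), Reasoner())
    print("selected hypothesis:", chosen)
```

The separation into two phases is what the abstract credits with avoiding the error propagation of one-phase neuro-symbolic designs: invalid candidate operations are filtered out by the reasoner before they can influence the final prediction.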
