Paper Title

ReasonChainQA: Text-based Complex Question Answering with Explainable Evidence Chains

Paper Authors

Minjun Zhu, Yixuan Weng, Shizhu He, Kang Liu, Jun Zhao

Paper Abstract

The ability to reason over evidence has received increasing attention in question answering (QA). Recently, the natural language database (NLDB) task has conducted complex QA over knowledge bases using textual evidence rather than structured representations; this task has attracted much attention because of the flexibility and richness of textual evidence. However, existing text-based complex question answering datasets fail to provide an explicit reasoning process, which is important for retrieval effectiveness and reasoning interpretability. Therefore, we present ReasonChainQA, a benchmark with explanatory and explicit evidence chains. ReasonChainQA consists of two subtasks, answer generation and evidence chain extraction, and it also offers higher diversity for multi-hop questions, with varying depths, 12 reasoning types, and 78 relations, so that high-quality textual evidence can be obtained for answering complex questions. Additional experiments on supervised and unsupervised retrieval fully demonstrate the significance of ReasonChainQA. The dataset and code will be made publicly available upon acceptance.
