Title

Complex Reading Comprehension Through Question Decomposition

Authors

Xiao-Yu Guo, Yuan-Fang Li, Gholamreza Haffari

Abstract

Multi-hop reading comprehension requires not only the ability to reason over raw text but also the ability to combine multiple pieces of evidence. We propose a novel learning approach that helps language models better understand difficult multi-hop questions and perform "complex, compositional" reasoning. Our model first learns to decompose each multi-hop question into several sub-questions using a trainable question decomposer. Instead of answering these sub-questions, we directly concatenate them with the original question and context, and leverage a reading comprehension model to predict the answer in a sequence-to-sequence manner. By using the same language model for both components, our best separate/unified T5-base variants outperform the baseline by 7.2/6.1 absolute F1 points on a hard subset of the DROP dataset.
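Below is a minimal sketch (not the authors' released code) of the second stage described in the abstract: sub-questions, assumed to have already been produced by the question decomposer, are concatenated with the original question and context, and a sequence-to-sequence model (T5-base here) generates the answer. The prompt layout, separator prefixes, and example texts are illustrative assumptions.

```python
# Sketch of the "concatenate sub-questions + question + context, then predict
# the answer seq2seq" step, assuming a Hugging Face T5-base checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Hypothetical multi-hop example; in the paper these would come from DROP and
# the sub-questions from the trainable question decomposer.
question = "How many years passed between the first and the second treaty?"
sub_questions = [
    "When was the first treaty signed?",
    "When was the second treaty signed?",
]
context = (
    "The first treaty was signed in 1845. "
    "The second treaty was signed in 1851."
)

# Build a single source sequence: original question, sub-questions, context.
source = " ".join(
    ["question: " + question]
    + ["sub-question: " + sq for sq in sub_questions]
    + ["context: " + context]
)

# Predict the answer as free-form text in a sequence-to-sequence manner.
inputs = tokenizer(source, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In the "unified" variant described in the abstract, the same language model would serve as both the question decomposer and the reading comprehension model; the sketch above only illustrates the reading-comprehension side of that pipeline.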
