Paper Title


Context-Aware Answer Extraction in Question Answering

Paper Authors

Yeon Seonwoo, Ji-Hoon Kim, Jung-Woo Ha, Alice Oh

Paper Abstract


Extractive QA models have shown very promising performance in predicting the correct answer to a question for a given passage. However, they sometimes predict the correct answer text but in a context irrelevant to the given question. This discrepancy becomes especially important as the number of occurrences of the answer text in a passage increases. To resolve this issue, we propose \textbf{BLANC} (\textbf{BL}ock \textbf{A}ttentio\textbf{N} for \textbf{C}ontext prediction) based on two main ideas: context prediction as an auxiliary task in a multi-task learning manner, and a block attention method that learns the context prediction task. With experiments on reading comprehension, we show that BLANC outperforms the state-of-the-art QA models, and the performance gap increases as the number of answer text occurrences increases. We also conduct an experiment in which the models are trained on SQuAD and used to predict the supporting facts on HotpotQA, and show that BLANC outperforms all baseline models in this zero-shot setting.
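
The abstract combines two ideas: an auxiliary context-prediction task trained jointly with answer-span extraction, and a block attention method. As a rough illustration of the multi-task part only, the sketch below (PyTorch) adds a per-token context-classification loss on top of a standard span-extraction loss. The class name MultiTaskQAHead and the loss weight lambda_ctx are hypothetical illustration choices; this does not reproduce BLANC's block attention mechanism.

import torch.nn as nn
import torch.nn.functional as F


class MultiTaskQAHead(nn.Module):
    # Hedged sketch: answer-span extraction plus an auxiliary per-token
    # context-prediction task, combined in a weighted multi-task loss.
    def __init__(self, hidden_size: int, lambda_ctx: float = 0.5):
        super().__init__()
        self.span_head = nn.Linear(hidden_size, 2)  # start/end logits per token
        self.ctx_head = nn.Linear(hidden_size, 2)   # token in answer context: yes/no
        self.lambda_ctx = lambda_ctx

    def forward(self, token_reprs, start_pos, end_pos, ctx_labels):
        # token_reprs: (batch, seq_len, hidden) from any pretrained encoder
        start_logits, end_logits = self.span_head(token_reprs).unbind(dim=-1)

        # Main task: predict the answer start and end positions
        span_loss = 0.5 * (F.cross_entropy(start_logits, start_pos)
                           + F.cross_entropy(end_logits, end_pos))

        # Auxiliary task: classify each token as inside/outside the answer's context
        ctx_logits = self.ctx_head(token_reprs)             # (batch, seq_len, 2)
        ctx_loss = F.cross_entropy(ctx_logits.reshape(-1, 2), ctx_labels.reshape(-1))

        return span_loss + self.lambda_ctx * ctx_loss

The intent of the auxiliary term is that tokens near the question-relevant occurrence of the answer text receive positive context labels, so the model is pushed to prefer that occurrence over identical answer strings elsewhere in the passage.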
