Paper Title

Query Refinement Prompts for Closed-Book Long-Form Question Answering

Paper Authors

Reinald Kim Amplayo, Kellie Webster, Michael Collins, Dipanjan Das, Shashi Narayan

Abstract

Large language models (LLMs) have been shown to perform well in answering questions and in producing long-form texts, both in few-shot closed-book settings. While the former can be validated using well-known evaluation metrics, the latter is difficult to evaluate. We resolve the difficulties to evaluate long-form output by doing both tasks at once -- to do question answering that requires long-form answers. Such questions tend to be multifaceted, i.e., they may have ambiguities and/or require information from multiple sources. To this end, we define query refinement prompts that encourage LLMs to explicitly express the multifacetedness in questions and generate long-form answers covering multiple facets of the question. Our experiments on two long-form question answering datasets, ASQA and AQuAMuSe, show that using our prompts allows us to outperform fully finetuned models in the closed book setting, as well as achieve results comparable to retrieve-then-generate open-book models.
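The abstract describes prompts that first ask the model to spell out a question's facets (ambiguities or sub-questions) and then produce one long-form answer covering all of them. The sketch below illustrates that idea with a hypothetical prompt template; the wording, function name, and structure are assumptions for illustration, not the paper's actual prompts.

```python
def build_refinement_prompt(question: str) -> str:
    """Wrap a question in a query-refinement instruction (hypothetical template).

    The template mirrors the two steps described in the abstract:
    (1) make the question's multiple facets explicit, then
    (2) answer all facets in a single long-form response.
    """
    return (
        "The following question may be ambiguous or multifaceted.\n"
        f"Question: {question}\n"
        "First, list the distinct interpretations or sub-questions it contains.\n"
        "Then write a single long-form answer that covers every facet."
    )

# Example: an ambiguous question with several valid readings
prompt = build_refinement_prompt("Who won the US Open?")
print(prompt)
```

In a closed-book setting, this string would be sent directly to the LLM (with few-shot exemplars prepended), with no retrieved documents attached.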
