Paper Title

GazBy: Gaze-Based BERT Model to Incorporate Human Attention in Neural Information Retrieval

Paper Authors

Dong, Sibo; Goldstein, Justin; Yang, Grace Hui

Abstract

This paper is interested in investigating whether human gaze signals can be leveraged to improve state-of-the-art search engine performance and how to incorporate this new input signal marked by human attention into existing neural retrieval models. In this paper, we propose GazBy (Gaze-based Bert model for document relevancy), a light-weight joint model that integrates human gaze fixation estimation into transformer models to predict document relevance, incorporating more nuanced information about cognitive processing into information retrieval (IR). We evaluate our model on the Text Retrieval Conference (TREC) Deep Learning (DL) 2019 and 2020 Tracks. Our experiments show encouraging results and illustrate the effective and ineffective entry points for using human gaze to help with transformer-based neural retrievers. With the rise of virtual reality (VR) and augmented reality (AR), human gaze data will become more available. We hope this work serves as a first step in exploring the use of gaze signals in modern neural search engines.
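The abstract describes a joint model that integrates gaze fixation estimation into a transformer relevance scorer, but does not spell out the fusion mechanism. Below is a minimal sketch of one way such a combination could look, assuming a per-token fixation head whose estimates re-weight BERT token representations before relevance scoring. All class names, the fusion strategy, and hyperparameters here are illustrative assumptions, not the authors' released GazBy code.

```python
# Hypothetical sketch: a gaze-aware BERT relevance scorer.
# Assumption: gaze fixation is estimated per token and used to pool
# token representations before a relevance head; this is NOT the
# authors' published architecture.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class GazeAwareRelevanceModel(nn.Module):
    def __init__(self, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Per-token gaze-fixation head: scores how strongly a reader
        # would attend to each token (hypothetical).
        self.gaze_head = nn.Linear(hidden, 1)
        # Relevance head over the gaze-weighted pooled representation.
        self.rel_head = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        token_states = out.last_hidden_state                     # (B, T, H)
        # Estimated fixation weights, masked over padding and normalized.
        gaze_logits = self.gaze_head(token_states).squeeze(-1)   # (B, T)
        gaze_logits = gaze_logits.masked_fill(attention_mask == 0, -1e4)
        gaze_weights = torch.softmax(gaze_logits, dim=-1)        # (B, T)
        # Gaze-weighted pooling instead of plain [CLS] pooling.
        pooled = torch.einsum("bt,bth->bh", gaze_weights, token_states)
        return self.rel_head(pooled).squeeze(-1), gaze_weights


# Usage example on a toy query-document pair.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(
    "what is neural retrieval",
    "Neural retrievers score documents with transformer encoders.",
    return_tensors="pt", truncation=True,
)
model = GazeAwareRelevanceModel()
score, fixations = model(batch["input_ids"], batch["attention_mask"])
```

In a sketch like this, the fixation head could be supervised with recorded or estimated gaze data while the relevance head is trained on relevance labels, which is one plausible reading of the "joint model" described in the abstract.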
