Paper Title
Automated analysis of eye-tracker-based human-human interaction studies
Paper Authors
Paper Abstract
Mobile eye-tracking systems have been available for about a decade now and are becoming increasingly popular in different fields of application, including marketing, sociology, usability studies and linguistics. While the user-friendliness and ergonomics of the hardware are improving at a rapid pace, the software for analyzing mobile eye-tracking data still lacks robustness and functionality in several respects. In this paper, we investigate which state-of-the-art computer vision algorithms can be used to automate the post-analysis of mobile eye-tracking data. For our case study, we focus on mobile eye-tracker recordings made during human-human face-to-face interactions. We compare two recent publicly available frameworks (YOLOv2 and OpenPose) for relating the gaze location generated by the eye-tracker to the head and hands visible in the scene-camera data. We show that such a single-pipeline framework provides robust results that are both more accurate and faster to obtain than those of previous work in the field. Moreover, our approach requires no manual intervention during this process.
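The core step in such a pipeline is the per-frame assignment of the gaze point to a detected area of interest (AOI). Below is a minimal sketch of that step, assuming axis-aligned detection boxes in scene-camera pixel coordinates, as a YOLOv2-style detector would produce; all names, the box format, and the smallest-box tie-breaking rule are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch: assigning a gaze sample to a detected AOI.
# Assumptions (not from the paper): detections are axis-aligned boxes in
# scene-camera pixel coordinates with class labels such as "head" and
# "hand"; all names below are hypothetical.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Detection:
    label: str   # e.g. "head" or "hand"
    x: float     # top-left corner, scene-camera pixels
    y: float
    w: float     # box width in pixels
    h: float     # box height in pixels


def contains(det: Detection, gx: float, gy: float) -> bool:
    """True if the gaze point (gx, gy) falls inside the detection box."""
    return det.x <= gx <= det.x + det.w and det.y <= gy <= det.y + det.h


def label_gaze(gx: float, gy: float, detections: list[Detection]) -> Optional[str]:
    """Return the label of the smallest box containing the gaze point,
    so that e.g. a hand held in front of the face wins over the larger
    head box; None if the gaze falls on no detected AOI."""
    hits = [d for d in detections if contains(d, gx, gy)]
    if not hits:
        return None
    return min(hits, key=lambda d: d.w * d.h).label


# Per-frame usage: one gaze sample plus that frame's detections.
frame_detections = [
    Detection("head", 310, 90, 120, 150),
    Detection("hand", 500, 260, 80, 90),
]
print(label_gaze(360.0, 140.0, frame_detections))  # -> "head"
```

With OpenPose, the same assignment could instead be driven by keypoints, for instance by testing the gaze point against a radius around the detected face and wrist keypoints rather than against bounding boxes.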