Paper Title

Contrastive Representation Learning for Gaze Estimation

Authors

Swati Jindal, Roberto Manduchi

Abstract

Self-supervised learning (SSL) has become prevalent for learning representations in computer vision. Notably, SSL exploits contrastive learning to encourage visual representations to be invariant under various image transformations. The task of gaze estimation, on the other hand, demands not just invariance to various appearances but also equivariance to the geometric transformations. In this work, we propose a simple contrastive representation learning framework for gaze estimation, named Gaze Contrastive Learning (GazeCLR). GazeCLR exploits multi-view data to promote equivariance and relies on selected data augmentation techniques that do not alter gaze directions for invariance learning. Our experiments demonstrate the effectiveness of GazeCLR for several settings of the gaze estimation task. Particularly, our results show that GazeCLR improves the performance of cross-domain gaze estimation and yields as high as 17.2% relative improvement. Moreover, the GazeCLR framework is competitive with state-of-the-art representation learning methods for few-shot evaluation. The code and pre-trained models are available at https://github.com/jswati31/gazeclr.
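The contrastive objective referred to in the abstract is typically an NT-Xent (normalized temperature-scaled cross-entropy) loss over positive and negative embedding pairs, where two augmented views of the same sample form a positive pair and all other samples in the batch serve as negatives. The sketch below is a minimal NumPy illustration of that generic loss, not the paper's exact formulation; the batch size, temperature, and embedding dimension are arbitrary assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Minimal NT-Xent contrastive loss (SimCLR-style), for illustration only.

    z1, z2: (N, D) arrays of embeddings for two views of the same N samples.
    Positive pairs are (z1[i], z2[i]); all other in-batch pairs are negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # Row i's positive sits at i+n (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Softmax cross-entropy against the positive index, averaged over rows.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Under this loss, closely aligned views score lower (better) than unrelated ones, which is the invariance behavior the abstract describes for appearance augmentations that leave gaze direction unchanged.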
