Paper Title
Unsupervised Domain Adaptation for Video Transformers in Action Recognition
Paper Authors
Paper Abstract
Over the last few years, Unsupervised Domain Adaptation (UDA) techniques have acquired remarkable importance and popularity in computer vision. However, compared to the extensive literature available for images, the video domain remains relatively unexplored. At the same time, the performance of action recognition models is heavily affected by domain shift. In this paper, we propose a simple and novel UDA approach for video action recognition. Our approach leverages recent advances in spatio-temporal transformers to build a robust source model that better generalises to the target domain. Furthermore, our architecture learns domain-invariant features thanks to the introduction of a novel alignment loss term derived from the Information Bottleneck principle. We report results on two video action recognition benchmarks for UDA, showing state-of-the-art performance on HMDB$\leftrightarrow$UCF, as well as on the more challenging Kinetics$\rightarrow$NEC-Drone setting. This demonstrates the effectiveness of our method in handling different levels of domain shift. The source code is available at https://github.com/vturrisi/UDAVT.
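To give a concrete sense of what a feature-alignment loss between source and target domains looks like, the sketch below implements a simple linear moment-matching loss in NumPy. This is an illustrative placeholder, not the paper's actual loss: the authors derive theirs from the Information Bottleneck principle, whose exact form is not reproduced here, and the function name `alignment_loss` is a hypothetical one chosen for this example.

```python
import numpy as np

def alignment_loss(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
    """Squared Euclidean distance between the mean feature vectors of a
    source batch and a target batch (a linear moment-matching penalty).

    Minimising this encourages the two domains' feature distributions to
    overlap, which is the general goal of UDA alignment terms. Note: this
    is a generic sketch, not the Information-Bottleneck-derived loss used
    in the paper.
    """
    mu_source = source_feats.mean(axis=0)   # (feat_dim,)
    mu_target = target_feats.mean(axis=0)   # (feat_dim,)
    return float(np.sum((mu_source - mu_target) ** 2))

# Example: features from a (batch, feat_dim) embedding of video clips.
rng = np.random.default_rng(0)
src = rng.normal(loc=0.0, size=(32, 128))   # source-domain features
tgt = rng.normal(loc=0.5, size=(32, 128))   # shifted target-domain features
loss = alignment_loss(src, tgt)             # positive when domains differ
```

During training, such a term would be added (with a weighting coefficient) to the supervised classification loss computed on the labelled source videos.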