Paper Title
Learning Representations from Audio-Visual Spatial Alignment
Paper Authors
Paper Abstract
We introduce a novel self-supervised pretext task for learning representations from audio-visual content. Prior work on audio-visual representation learning leverages correspondences at the video level. Approaches based on audio-visual correspondence (AVC) predict whether audio and video clips originate from the same or different video instances. Audio-visual temporal synchronization (AVTS) further discriminates negative pairs originating from the same video instance but at different moments in time. While these approaches learn high-quality representations for downstream tasks such as action recognition, their training objectives disregard spatial cues naturally occurring in audio and visual signals. To learn from these spatial cues, we task a network to perform contrastive audio-visual spatial alignment of 360° video and spatial audio. The ability to perform spatial alignment is enhanced by reasoning over the full spatial content of the 360° video using a transformer architecture to combine representations from multiple viewpoints. The advantages of the proposed pretext task are demonstrated on a variety of audio and visual downstream tasks, including audio-visual correspondence, spatial alignment, action recognition, and video semantic segmentation.
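To make the contrastive spatial-alignment objective concrete, the sketch below shows one plausible InfoNCE-style formulation: video embeddings extracted at several viewpoints of a 360° frame are matched against audio embeddings decoded toward the same viewpoints, with misaligned viewpoint pairs serving as negatives. This is a minimal illustrative sketch under assumed tensor shapes and an assumed temperature value, not the authors' implementation; the transformer that aggregates viewpoint representations is omitted and the embeddings are assumed to be precomputed.

```python
# Illustrative sketch of a contrastive audio-visual spatial alignment loss.
# Assumption: video_emb and audio_emb hold per-viewpoint embeddings for the
# same clips (e.g., crops of a 360° frame and ambisonic audio decoded toward
# the same viewpoints). Shapes and temperature are illustrative only.
import torch
import torch.nn.functional as F


def spatial_alignment_loss(video_emb: torch.Tensor,
                           audio_emb: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss pairing each viewpoint's video embedding with the
    audio embedding decoded at that viewpoint.

    video_emb: (B, V, D) - B clips, V viewpoints, D-dim embeddings.
    audio_emb: (B, V, D) - audio embeddings for the same V viewpoints.
    """
    B, V, _ = video_emb.shape
    v = F.normalize(video_emb, dim=-1)
    a = F.normalize(audio_emb, dim=-1)

    # Similarity between every (video viewpoint, audio viewpoint) pair within
    # a clip; off-diagonal (misaligned) pairs act as negatives.
    logits = torch.einsum('bvd,bwd->bvw', v, a) / temperature  # (B, V, V)

    targets = torch.arange(V, device=video_emb.device).expand(B, V)
    loss_v2a = F.cross_entropy(logits.reshape(B * V, V), targets.reshape(-1))
    loss_a2v = F.cross_entropy(logits.transpose(1, 2).reshape(B * V, V),
                               targets.reshape(-1))
    return 0.5 * (loss_v2a + loss_a2v)


if __name__ == "__main__":
    video = torch.randn(2, 4, 128)  # 2 clips, 4 viewpoints, 128-d embeddings
    audio = torch.randn(2, 4, 128)
    print(spatial_alignment_loss(video, audio).item())
```

In this formulation the negatives come from spatially misaligned viewpoints of the same clip, which is what distinguishes spatial alignment from AVC (negatives from other videos) and AVTS (negatives from other moments in time).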