Paper Title
Jointly Learning Visual and Auditory Speech Representations from Raw Data
Paper Authors
Paper Abstract
We present RAVEn, a self-supervised multi-modal approach to jointly learn visual and auditory speech representations. Our pre-training objective involves encoding masked inputs, and then predicting contextualised targets generated by slowly-evolving momentum encoders. Driven by the inherent differences between video and audio, our design is asymmetric w.r.t. the two modalities' pretext tasks: Whereas the auditory stream predicts both the visual and auditory targets, the visual one predicts only the auditory targets. We observe strong results in low- and high-resource labelled data settings when fine-tuning the visual and auditory encoders resulting from a single pre-training stage, in which the encoders are jointly trained. Notably, RAVEn surpasses all self-supervised methods on visual speech recognition (VSR) on LRS3, and combining RAVEn with self-training using only 30 hours of labelled data even outperforms a recent semi-supervised method trained on 90,000 hours of non-public data. At the same time, we achieve state-of-the-art results in the LRS3 low-resource setting for auditory speech recognition (as well as for VSR). Our findings point to the viability of learning powerful speech representations entirely from raw video and audio, i.e., without relying on handcrafted features. Code and models are available at https://github.com/ahaliassos/raven.
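To make the pre-training objective described above more concrete, the following is a minimal sketch, not the authors' released implementation, of RAVEn-style masked prediction with momentum-encoder targets and asymmetric cross-modal pretext tasks. The `Encoder` class, `ema_update` helper, linear predictor heads, masking ratio, and cosine-regression loss are all illustrative assumptions; see https://github.com/ahaliassos/raven for the actual code and models.

```python
# Minimal sketch (assumptions noted in comments) of RAVEn-style pre-training:
# student encoders see masked inputs, slowly-evolving momentum (EMA) teachers
# see unmasked inputs and provide contextualised targets; the audio stream
# predicts both modalities' targets, the video stream predicts only audio targets.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Toy stand-in for the visual/auditory encoders (architecture is an assumption)."""

    def __init__(self, dim=256, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        return self.backbone(x)


def ema_update(teacher, student, momentum=0.999):
    """Slowly-evolving momentum update of a target (teacher) encoder."""
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)


def regression_loss(pred, target):
    """Negative cosine similarity between predictions and detached targets."""
    return -F.cosine_similarity(pred, target.detach(), dim=-1).mean()


dim = 256
video_student, audio_student = Encoder(dim), Encoder(dim)
video_teacher = copy.deepcopy(video_student)
audio_teacher = copy.deepcopy(audio_student)

# Lightweight predictor heads (hypothetical): map student features to targets.
a2v_pred = nn.Linear(dim, dim)  # audio -> video targets
a2a_pred = nn.Linear(dim, dim)  # audio -> audio targets
v2a_pred = nn.Linear(dim, dim)  # video -> audio targets

# Dummy feature sequences standing in for raw video/audio frontends.
video, audio = torch.randn(2, 50, dim), torch.randn(2, 50, dim)
mask = torch.rand(2, 50, 1) < 0.3           # crude random masking for illustration
video_masked, audio_masked = video * ~mask, audio * ~mask

v_feat = video_student(video_masked)
a_feat = audio_student(audio_masked)
with torch.no_grad():
    v_tgt = video_teacher(video)            # contextualised targets from unmasked inputs
    a_tgt = audio_teacher(audio)

# Asymmetric pretext tasks: audio predicts both modalities, video predicts audio only.
loss = (
    regression_loss(a2v_pred(a_feat), v_tgt)
    + regression_loss(a2a_pred(a_feat), a_tgt)
    + regression_loss(v2a_pred(v_feat), a_tgt)
)
loss.backward()
ema_update(video_teacher, video_student)
ema_update(audio_teacher, audio_student)
```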