Paper Title

Synchronous Bidirectional Learning for Multilingual Lip Reading

Authors

Mingshuang Luo, Shuang Yang, Xilin Chen, Zitao Liu, Shiguang Shan

Abstract

Lip reading has received increasing attention in recent years. This paper focuses on the synergy of multilingual lip reading. There are as many as about 7,000 languages in the world, which implies that it is impractical to train a separate lip reading model with large-scale data for each language. Although each language has its own linguistic and pronunciation rules, the lip movements of all languages share similar patterns due to the common structure of the human speech organs. Based on this idea, we explore the synergized learning of multilingual lip reading and further propose a synchronous bidirectional learning (SBL) framework for effective synergy of multilingual lip reading. We first introduce phonemes as the modeling units for the multilingual setting. Phonemes are more closely related to lip movements than alphabet letters, and similar phonemes always lead to similar visual patterns regardless of the target language. Then, a novel SBL block is proposed to learn the rules of each language in a fill-in-the-blank way: the model has to learn to infer a target unit given its bidirectional context, which can represent the phoneme composition rules of each language. To make the learning process more targeted at each particular language, an extra task of predicting the language identity is introduced during training. Finally, a thorough comparison on LRW (English) and LRW-1000 (Mandarin) shows the promising benefits of the synergized learning of different languages and reports new state-of-the-art results on both datasets.
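The fill-in-the-blank objective described in the abstract can be illustrated with a minimal, purely didactic sketch (hypothetical function names and toy phoneme data, not the authors' implementation): a masked unit in a phoneme sequence is inferred from its left and right context, conditioned on the language identity.

```python
from collections import Counter, defaultdict

# Toy corpora of phoneme sequences tagged with a language identity.
# (Hypothetical data; the paper trains on LRW / LRW-1000 word videos.)
corpus = [
    (["HH", "AH", "L", "OW"], "en"),   # "hello"
    (["HH", "AH", "L", "P"], "en"),    # "help"
    (["N", "I", "H", "AO"], "zh"),     # illustrative Mandarin units
]

# Fill-in-the-blank statistics: for each (left, right, language) triple,
# count which unit occurs between the bidirectional context pair.
ctx_counts = defaultdict(Counter)
for seq, lang in corpus:
    for i in range(1, len(seq) - 1):
        ctx_counts[(seq[i - 1], seq[i + 1], lang)][seq[i]] += 1

def infer_masked(left, right, lang):
    """Infer the masked unit given its bidirectional context and language id."""
    counts = ctx_counts[(left, right, lang)]
    return counts.most_common(1)[0][0] if counts else None

# "HH [MASK] OW" in English: the context (HH, ., OW) recovers "AH";
# conditioning on the language id keeps each language's rules separate.
print(infer_masked("HH", "OW", "en"))
print(infer_masked("N", "H", "zh"))
```

In the actual SBL framework this counting model is replaced by a neural network trained jointly on the masked-unit prediction and the language-identity prediction; the sketch only shows the shape of the two supervision signals.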
