Paper Title


Adversarial Feature Learning and Unsupervised Clustering based Speech Synthesis for Found Data with Acoustic and Textual Noise

Paper Authors

Shan Yang, Yuxuan Wang, Lei Xie

Paper Abstract


Attention-based sequence-to-sequence (seq2seq) speech synthesis has achieved extraordinary performance. However, a studio-quality corpus with manual transcriptions is necessary to train such seq2seq systems. In this paper, we propose an approach to build a high-quality and stable seq2seq-based speech synthesis system using challenging found data, where the training speech contains noisy interference (acoustic noise) and the texts are imperfect speech recognition transcripts (textual noise). To deal with the text-side noise, we propose a VQ-VAE based heuristic method to compensate for erroneous linguistic features with phonetic information learned directly from speech. As for the speech-side noise, we propose to learn noise-independent features in the auto-regressive decoder through adversarial training and data augmentation, which does not need an extra speech enhancement model. Experiments show the effectiveness of the proposed approach in dealing with text-side and speech-side noise. Surpassing the denoising approach based on a state-of-the-art speech enhancement model, our system built on noisy found data can synthesize clean and high-quality speech, with a MOS close to that of the system built on the clean counterpart.
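For readers unfamiliar with the speech-side part of the approach, the sketch below illustrates the general idea of adversarial feature learning with a gradient reversal layer: a small classifier tries to tell clean frames from noise-augmented frames, while the reversed gradient pushes the decoder toward noise-independent features. The module names, dimensions, and classifier architecture here are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient flows back into the decoder; no gradient for `lam`.
        return -ctx.lam * grad_output, None


class NoiseClassifier(nn.Module):
    """Binary classifier that tries to tell clean frames from noise-augmented frames."""

    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # 2 classes: clean vs. noisy
        )

    def forward(self, decoder_features, lam=1.0):
        # The classifier learns to detect the noise condition, while the decoder
        # receives inverted gradients and is pushed toward noise-independent features.
        reversed_feats = GradientReversal.apply(decoder_features, lam)
        return self.net(reversed_feats)


if __name__ == "__main__":
    # Toy example: 8 utterances x 100 frames of 512-dim decoder features,
    # labeled 0 (clean) or 1 (noise-augmented copy).
    feats = torch.randn(8, 100, 512, requires_grad=True)
    labels = torch.randint(0, 2, (8, 100))

    clf = NoiseClassifier(feat_dim=512)
    logits = clf(feats, lam=0.5)
    loss = nn.functional.cross_entropy(logits.reshape(-1, 2), labels.reshape(-1))
    loss.backward()  # gradients w.r.t. `feats` are reversed by the gradient reversal layer
```

In such a setup, the classifier's cross-entropy loss would simply be added to the TTS training objective, with the reversal weight (`lam` above) controlling how strongly the decoder is regularized. Since the abstract does not give the exact architecture, treat this purely as a sketch of the gradient-reversal idea.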
