Paper Title

DSPGAN: a GAN-based universal vocoder for high-fidelity TTS by time-frequency domain supervision from DSP

Paper Authors

Kun Song, Yongmao Zhang, Yi Lei, Jian Cong, Hanzhao Li, Lei Xie, Gang He, Jinfeng Bai

Paper Abstract

Recent development of neural vocoders based on the generative adversarial neural network (GAN) has shown obvious advantages of generating raw waveform conditioned on mel-spectrogram with fast inference speed and lightweight networks. However, it is still challenging to train a universal neural vocoder that can synthesize high-fidelity speech from various scenarios with unseen speakers, languages, and speaking styles. In this paper, we propose DSPGAN, a GAN-based universal vocoder for high-fidelity speech synthesis by applying the time-frequency domain supervision from digital signal processing (DSP). To eliminate the mismatch problem caused by the ground-truth spectrograms in the training phase and the predicted spectrograms in the inference phase, we leverage the mel-spectrogram extracted from the waveform generated by a DSP module, rather than the predicted mel-spectrogram from the Text-to-Speech (TTS) acoustic model, as the time-frequency domain supervision to the GAN-based vocoder. We also utilize sine excitation as the time-domain supervision to improve the harmonic modeling and eliminate various artifacts of the GAN-based vocoder. Experiments show that DSPGAN significantly outperforms the compared approaches and can generate high-fidelity speech for various TTS models trained using diverse data.
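The abstract describes two DSP-derived supervision signals: a sine excitation built from the F0 contour (time-domain supervision) and a mel-spectrogram extracted from the DSP module's output waveform (time-frequency domain supervision). The sketch below illustrates how such signals can be computed; it is a minimal, single-harmonic illustration, not the paper's implementation. The sample rate, hop size, mel settings, and the helper names `sine_excitation` and `supervision_mel` are assumptions for illustration only.

```python
import numpy as np
import librosa

SR = 24000   # sample rate (assumed, not from the paper)
HOP = 256    # hop length for mel extraction (assumed)
N_MELS = 80  # number of mel bins (assumed)

def sine_excitation(f0_frames, hop=HOP, sr=SR):
    """Sample-level sine excitation from a frame-level F0 contour.

    f0_frames: per-frame F0 in Hz, with 0 marking unvoiced frames.
    Returns a waveform-length sine signal usable as time-domain
    supervision/conditioning (simplified to a single harmonic here).
    """
    # Upsample F0 from frame rate to sample rate by repetition.
    f0 = np.repeat(np.asarray(f0_frames, dtype=np.float64), hop)
    voiced = (f0 > 0).astype(np.float32)
    # Integrate instantaneous frequency to get phase, then take the sine.
    phase = 2.0 * np.pi * np.cumsum(f0 / sr)
    return np.sin(phase).astype(np.float32) * voiced

def supervision_mel(dsp_wav, sr=SR, hop=HOP, n_mels=N_MELS):
    """Log-mel spectrogram of the waveform produced by the DSP module.

    Per the abstract, this mel (rather than the acoustic model's predicted
    mel) serves as the time-frequency domain supervision to the GAN-based
    vocoder, avoiding the train/inference mel mismatch.
    """
    mel = librosa.feature.melspectrogram(
        y=dsp_wav, sr=sr, n_fft=1024, hop_length=hop, n_mels=n_mels)
    return np.log(np.clip(mel, 1e-5, None))
```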
