Paper Title
Short-time deep-learning based source separation for speech enhancement in reverberant environments with beamforming
Paper Authors
Paper Abstract
This paper addresses the problem of source separation-based speech enhancement with multiple beamforming in reverberant indoor environments. We argue that more generic solutions should cope with time-varying dynamic scenarios, with a moving microphone array or moving sources, such as those found in voice-based human-robot interaction (HRI) or smart speaker applications. The effectiveness of ordinary source separation methods based on statistical models, such as ICA and NMF, depends on the analysis window size, and these methods cannot handle reverberant environments. To address these limitations, a short-time source separation method based on a temporal convolutional network (TCN) combined with compact bilinear pooling is presented. The proposed scheme is virtually independent of the analysis window size and does not lose effectiveness when the analysis window is shortened to 1.6 s, which in turn makes it well suited to tackling the source separation problem in time-varying scenarios. Moreover, improvements in WER as high as 80% were obtained compared to ICA and NMF, with multi-condition reverberant training and testing, and with time-varying-SNR experiments that simulate a moving target speech source. Finally, estimating the clean signal with the proposed scheme and decoding it with a clean-trained ASR yielded a WER 13% lower than the one obtained with the corrupted signal and a multi-condition trained ASR. This surprising result contradicts the widely adopted practice of using multi-condition trained ASR systems and reinforces the use of speech enhancement methods for user profiling in HRI environments.
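The abstract names compact bilinear pooling as the mechanism for fusing features. The paper does not give its formulation here, but a common way to realize compact bilinear pooling is the count-sketch (Tensor Sketch) approach, which approximates the outer product of two feature vectors by circular convolution of their random sketches. The sketch below is a minimal illustration of that general technique, not the paper's actual implementation; all function names, the sketch dimension `d`, and the use of fixed random hash/sign vectors are assumptions for the example.

```python
import numpy as np


def count_sketch(x, h, s, d):
    # Count sketch: y[h[i]] += s[i] * x[i], projecting x into a d-dim vector.
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y


def compact_bilinear_pooling(x1, x2, d=1024, seed=0):
    # Illustrative compact bilinear pooling via Tensor Sketch (assumed variant,
    # not necessarily the paper's): sketch each input, then combine the
    # sketches by circular convolution, computed in the frequency domain.
    rng = np.random.default_rng(seed)  # fixed seed -> fixed hash functions
    h1 = rng.integers(0, d, size=x1.shape[0])          # hash indices for x1
    h2 = rng.integers(0, d, size=x2.shape[0])          # hash indices for x2
    s1 = rng.choice([-1.0, 1.0], size=x1.shape[0])     # random signs for x1
    s2 = rng.choice([-1.0, 1.0], size=x2.shape[0])     # random signs for x2
    y1 = count_sketch(x1, h1, s1, d)
    y2 = count_sketch(x2, h2, s2, d)
    # Element-wise product in the FFT domain = circular convolution in time,
    # which approximates pooling over the outer product x1 * x2^T.
    return np.real(np.fft.ifft(np.fft.fft(y1) * np.fft.fft(y2)))
```

The appeal of this formulation is that the pooled representation has dimension `d` instead of the quadratic size of the full outer product, which keeps the fusion layer small enough to sit inside a network such as a TCN.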