Paper Title
Diffusion-based Generative Speech Source Separation
Paper Authors
Paper Abstract
We propose DiffSep, a new single-channel source separation method based on score-matching of a stochastic differential equation (SDE). We craft a tailored continuous-time diffusion-mixing process starting from the separated sources and converging to a Gaussian distribution centered on their mixture. This formulation lets us apply the machinery of score-based generative modelling. First, we train a neural network to approximate the score function of the marginal probabilities of the diffusion-mixing process. Then, we use it to solve the reverse-time SDE that progressively separates the sources starting from their mixture. We propose a modified training strategy to handle model mismatch and source permutation ambiguity. Experiments on the WSJ0 2mix dataset demonstrate the potential of the method. Furthermore, the method is also suitable for speech enhancement and shows performance competitive with prior work on the VoiceBank-DEMAND dataset.
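The core idea — a forward process whose mean drifts from the separated sources toward their mixture while Gaussian noise grows — can be illustrated with a toy sketch. This is a minimal illustration of the general concept, not the paper's exact SDE; the interpolation rate `lam` and noise scale `sigma` are hypothetical parameters chosen for the example.

```python
import numpy as np

def forward_marginal(sources, t, lam=2.0, sigma=0.5):
    """Mean and std of a toy diffusion-mixing process at time t.

    At t = 0 the mean equals the separated sources with zero noise;
    as t grows, the mean converges to the mixture (broadcast to every
    source channel) and the noise std saturates at `sigma`.
    Illustrative only -- not the SDE used in DiffSep.
    """
    x0 = np.asarray(sources, dtype=float)       # shape (n_src, n_samples)
    mix = x0.mean(axis=0, keepdims=True)        # mixture, shared by all sources
    a = np.exp(-lam * t)                        # interpolation weight: 1 -> 0
    mean = a * x0 + (1.0 - a) * mix             # drifts toward the mixture
    std = sigma * np.sqrt(1.0 - np.exp(-2.0 * lam * t))
    return mean, std

# Drawing a sample x_t is then: mean + std * np.random.randn(*mean.shape).
# Reverse-time sampling would start from noise around the mixture and use a
# learned score network to walk this process backwards toward the sources.
```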