Paper Title

A Model Compression Method with Matrix Product Operators for Speech Enhancement

Authors

Sun, Xingwei, Gao, Ze-Feng, Lu, Zhong-Yi, Li, Junfeng, Yan, Yonghong

Abstract

Deep neural network (DNN) based speech enhancement approaches have achieved promising performance. However, the number of parameters involved in these methods is usually enormous for real applications of speech enhancement on devices with limited resources, which seriously restricts their deployment. To deal with this issue, model compression techniques are being widely studied. In this paper, we propose a model compression method based on matrix product operators (MPO) to substantially reduce the number of parameters in DNN models for speech enhancement. In this method, the weight matrices in the linear transformations of the neural network model are replaced by the MPO decomposition format before training. In the experiments, this process is applied to causal neural network models, such as the feedforward multilayer perceptron (MLP) and long short-term memory (LSTM) models. Both MLP and LSTM models with and without compression are then utilized to estimate the ideal ratio mask for monaural speech enhancement. The experimental results show that our proposed MPO-based method outperforms the widely-used pruning method for speech enhancement under various compression rates, and further improvement can be achieved with respect to low compression rates. Our proposal provides an effective model compression method for speech enhancement, especially in cloud-free applications.
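The core idea of replacing a linear layer's weight matrix with an MPO (tensor-train) format can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the matrix sizes, the two-core factorization, and the bond rank `r` are illustrative assumptions, and the paper trains the cores directly rather than obtaining them by SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weight matrix of a linear layer: (out=16, in=16),
# with the output index factored as 4*4 and the input index as 4*4.
m1, m2, n1, n2 = 4, 4, 4, 4
W = rng.standard_normal((m1 * m2, n1 * n2))

# Reshape to (m1, m2, n1, n2) and regroup the indices as (m1, n1) x (m2, n2),
# pairing each output factor with an input factor, as in an MPO.
T = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)

# A truncated SVD yields the two local tensors (MPO cores).
# The bond rank r controls the compression rate.
r = 4
U, S, Vt = np.linalg.svd(T, full_matrices=False)
core1 = (U[:, :r] * S[:r]).reshape(m1, n1, r)   # shape (m1, n1, r)
core2 = Vt[:r, :].reshape(r, m2, n2)            # shape (r, m2, n2)

# Contracting the cores recovers an approximation of W,
# which is applied as an ordinary linear transformation y = W_mpo @ x.
W_mpo = np.einsum('aib,bcj->acij', core1, core2).reshape(m1 * m2, n1 * n2)

params_full = W.size                  # 256 parameters
params_mpo = core1.size + core2.size  # 128 parameters at r=4
```

At full bond rank the contraction reproduces `W` exactly; truncating `r` trades accuracy for a smaller parameter count, which is the compression knob the abstract refers to.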
