Paper Title

Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition

Authors

Zhifu Gao, Shiliang Zhang, Ian McLoughlin, Zhijie Yan

Abstract

Transformers have recently dominated the ASR field. Although able to yield good performance, they involve an autoregressive (AR) decoder that generates tokens one by one, which is computationally inefficient. To speed up inference, non-autoregressive (NAR) methods, e.g. single-step NAR, were designed to enable parallel generation. However, due to an independence assumption within the output tokens, the performance of single-step NAR is inferior to that of AR models, especially on a large-scale corpus. There are two challenges to improving single-step NAR: firstly, to accurately predict the number of output tokens and extract hidden variables; secondly, to enhance the modeling of interdependence between output tokens. To tackle both challenges, we propose a fast and accurate parallel transformer, termed Paraformer. It utilizes a continuous integrate-and-fire (CIF) based predictor to predict the number of tokens and generate hidden variables. A glancing language model (GLM) sampler then generates semantic embeddings to enhance the NAR decoder's ability to model context interdependence. Finally, we design a strategy to generate negative samples for minimum word error rate (MWER) training to further improve performance. Experiments on the public AISHELL-1 and AISHELL-2 benchmarks and an industrial-level 20,000-hour task demonstrate that the proposed Paraformer attains performance comparable to a state-of-the-art AR transformer, with more than 10x speedup.
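
The CIF predictor mentioned in the abstract accumulates per-frame weights over the encoder output and emits one integrated embedding each time the accumulated weight crosses a threshold, so the total weight sum estimates the token count. Below is a minimal sketch in PyTorch; the function name `cif`, the threshold `beta`, and the input shapes are illustrative assumptions, not the authors' implementation.

```python
import torch

def cif(encoder_out: torch.Tensor, alpha: torch.Tensor, beta: float = 1.0):
    """Integrate frames of encoder_out (T, D) weighted by alpha (T,);
    fire one token embedding each time the running weight crosses beta.
    Illustrative sketch only -- not the paper's actual code."""
    fired, accum = [], 0.0
    state = torch.zeros(encoder_out.size(1))
    for h_t, a_t in zip(encoder_out, alpha.tolist()):
        if accum + a_t < beta:
            accum += a_t
            state = state + a_t * h_t           # keep integrating
        else:
            remain = beta - accum               # weight needed to finish token
            fired.append(state + remain * h_t)  # fire integrated embedding
            accum = a_t - remain                # leftover starts next token
            state = accum * h_t
    if not fired:
        return encoder_out.new_zeros(0, encoder_out.size(1))
    return torch.stack(fired)                   # (N, D), N ≈ alpha.sum()
```

During training, the per-frame weights are typically rescaled so that they sum to the reference length, which makes the number of fired embeddings match the number of target tokens.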
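The GLM sampler can be pictured as a two-pass training trick: a first NAR pass is scored against the reference, and a number of hidden variables proportional to the error count is replaced by ground-truth token embeddings before a second pass. A minimal sketch under that assumption follows; `glm_sample`, the ratio `lam`, and the tensor names are hypothetical.

```python
import torch

def glm_sample(hidden, target_ids, first_pass_ids, char_embed, lam=0.5):
    """Replace a random subset of hidden variables (N, D) with target
    token embeddings; the subset size scales with first-pass errors.
    Illustrative sketch, not the authors' exact sampling rule."""
    errors = (first_pass_ids != target_ids).sum().item()  # distance d(y', y)
    n_glance = int(lam * errors)                          # tokens to reveal
    idx = torch.randperm(target_ids.size(0))[:n_glance]   # random positions
    mixed = hidden.clone()
    mixed[idx] = char_embed(target_ids[idx])              # glance at truth
    return mixed                                          # second-pass input
```

Training the second decoder pass on this mixed input forces the remaining predictions to condition on the revealed tokens, which is how the model can learn interdependence between output tokens while still decoding in parallel.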
