Paper Title

BiFSMNv2: Pushing Binary Neural Networks for Keyword Spotting to Real-Network Performance

Paper Authors

Haotong Qin, Xudong Ma, Yifu Ding, Xiaoyang Li, Yang Zhang, Zejun Ma, Jiakai Wang, Jie Luo, Xianglong Liu

Paper Abstract

Deep neural networks, such as the Deep-FSMN, have been widely studied for keyword spotting (KWS) applications while suffering from expensive computation and storage. Therefore, network compression technologies like binarization are studied to deploy KWS models on edge devices. In this paper, we present a strong yet efficient binary neural network for KWS, namely BiFSMNv2, pushing it to real-network accuracy. First, we present a Dual-scale Thinnable 1-bit-Architecture to recover the representation capability of the binarized computation units by dual-scale activation binarization and liberate the speedup potential from an overall architecture perspective. Second, we construct a Frequency Independent Distillation scheme for KWS binarization-aware training, which distills the high- and low-frequency components independently to mitigate the information mismatch between full-precision and binarized representations. Moreover, we propose the Learning Propagation Binarizer, a general and efficient binarizer that enables the forward and backward propagation of binary KWS networks to be continuously improved through learning. We implement and deploy BiFSMNv2 on ARMv8 real-world hardware with a novel Fast Bitwise Computation Kernel, which is proposed to fully utilize registers and increase instruction throughput. Comprehensive experiments show that our BiFSMNv2 outperforms existing binary networks for KWS by convincing margins across different datasets and achieves accuracy comparable to full-precision networks (only a tiny 1.51% drop on Speech Commands V1-12). We highlight that, benefiting from the compact architecture and optimized hardware kernel, BiFSMNv2 can achieve an impressive 25.1x speedup and 20.2x storage saving on edge hardware.
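Illustrative Code Sketches

As a rough illustration of what "dual-scale activation binarization" could look like, the sketch below approximates an activation with two binary components at different scales: a coarse sign term plus a binarized residual. This is one common way to recover representation capability in 1-bit units, offered here as an assumption for illustration; the paper's Dual-scale Thinnable 1-bit-Architecture defines the actual construction.

```python
import torch
import torch.nn as nn

class DualScaleBinaryActivation(nn.Module):
    """Hypothetical sketch of dual-scale activation binarization: the
    activation is approximated by a coarse binary term plus a binarized
    residual at a finer scale. The paper's exact formulation may differ."""

    def forward(self, x):
        # Coarse scale: sign(x) scaled by the mean absolute activation.
        a1 = x.abs().mean() * torch.sign(x)
        # Fine scale: binarize the residual left over by the coarse term.
        r = x - a1
        a2 = r.abs().mean() * torch.sign(r)
        approx = a1 + a2
        # Straight-through estimator so gradients flow through binarization.
        return x + (approx - x).detach()
```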
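The Frequency Independent Distillation scheme distills high- and low-frequency components of the features independently. The abstract does not specify how the decomposition is performed, so the sketch below uses a circular low-pass mask in the 2-D FFT domain as one plausible split; the function names, mask shape, and `radius` parameter are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def frequency_split(feat, radius):
    """Split a (batch, T, D) feature map into low- and high-frequency parts
    using a circular low-pass mask in the 2-D FFT domain (an illustrative
    choice, not necessarily the paper's decomposition)."""
    spec = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
    t, d = feat.shape[-2], feat.shape[-1]
    ty = torch.arange(t, device=feat.device).view(-1, 1) - t // 2
    dx = torch.arange(d, device=feat.device).view(1, -1) - d // 2
    mask = ((ty**2 + dx**2) <= radius**2).to(feat.dtype)
    low = torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real
    return low, feat - low

def fid_loss(student_feat, teacher_feat, radius=8):
    # Distill the two bands independently, so the binarized student matches
    # each frequency component on its own terms rather than the raw feature.
    s_low, s_high = frequency_split(student_feat, radius)
    t_low, t_high = frequency_split(teacher_feat, radius)
    return F.mse_loss(s_low, t_low) + F.mse_loss(s_high, t_high)
```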
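The Learning Propagation Binarizer improves both forward and backward propagation through learning. A minimal sketch in that spirit is shown below, assuming a learnable output scale for the forward pass and a learnable clipping window for the straight-through estimator in the backward pass; the exact LPB formulation is given in the paper.

```python
import torch
import torch.nn as nn

class LearnableBinarizer(nn.Module):
    """Hypothetical sketch in the spirit of the Learning Propagation
    Binarizer: the forward pass binarizes with a learnable scale, and the
    backward pass uses a straight-through estimator whose clipping window
    is also learned. Not the paper's exact formulation."""

    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1))  # learnable forward scale (assumption)
        self.clip = nn.Parameter(torch.ones(1))   # learnable STE window (assumption)

    def forward(self, x):
        # Forward value: scale * sign(x).
        hard = self.scale * torch.sign(x)
        # Backward surrogate: gradients flow only where |x| < clip, so both
        # the forward scale and the backward window improve through learning.
        soft = self.scale * torch.clamp(x / self.clip, -1.0, 1.0)
        return soft + (hard - soft).detach()
```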
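The Fast Bitwise Computation Kernel itself is ARMv8-specific and is not reproduced here, but the arithmetic identity such kernels accelerate is standard for binary networks: with values in {-1, +1} packed as bits, a dot product reduces to an XNOR followed by a popcount. A minimal Python illustration of that identity:

```python
def xnor_popcount_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Binary dot product over n packed bits, where bit 1 encodes +1 and
    bit 0 encodes -1: dot(a, b) = 2 * popcount(xnor(a, b)) - n."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # keep only the n data bits
    return 2 * bin(xnor).count("1") - n

# Example: a = [+1, -1, +1, -1] -> 0b1010, b = [+1, +1, -1, -1] -> 0b1100.
# The vectors agree in two positions and disagree in two, so dot = 0.
print(xnor_popcount_dot(0b1010, 0b1100, 4))  # -> 0
```

On real hardware the same identity is applied to wide registers, which is why binarization yields the large speedups reported in the abstract.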
