Paper Title

Sub-8-Bit Quantization Aware Training for 8-Bit Neural Network Accelerator with On-Device Speech Recognition

Authors

Kai Zhen, Hieu Duy Nguyen, Raviteja Chinta, Nathan Susanj, Athanasios Mouchtaris, Tariq Afzal, Ariya Rastrow

Abstract

We present a novel sub-8-bit quantization-aware training (S8BQAT) scheme for 8-bit neural network accelerators. Our method is inspired by Lloyd-Max compression theory, with practical adaptations to keep the computational overhead during training feasible. With quantization centroids derived from a 32-bit baseline, we augment the training loss with a Multi-Regional Absolute Cosine (MRACos) regularizer that aggregates weights towards their nearest centroid, effectively acting as a pseudo compressor. Additionally, a periodically invoked hard compressor is introduced to improve the convergence rate by emulating runtime model weight quantization. We apply S8BQAT to speech recognition tasks using the Recurrent Neural Network Transducer (RNN-T) architecture. With S8BQAT, we are able to increase the model parameter size to reduce the word error rate by 4-16% relatively, while still improving latency by 5%.
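The abstract describes two mechanisms: a soft regularizer that pulls weights toward their nearest quantization centroid (the pseudo compressor), and a periodically invoked hard compressor that snaps weights onto the centroids to emulate runtime quantization. The following is a minimal PyTorch sketch of that general idea, not the paper's exact MRACos formulation; the function names, `lambda_reg`, and `hard_compress_period` are illustrative assumptions.

```python
import torch

def nearest_centroid_penalty(weights: torch.Tensor, centroids: torch.Tensor) -> torch.Tensor:
    """Soft pseudo-compressor: mean distance of each weight to its nearest centroid.
    Note: the paper uses a multi-regional absolute cosine form; this is a generic stand-in."""
    # (num_weights, num_centroids) absolute-difference matrix
    dists = (weights.reshape(-1, 1) - centroids.reshape(1, -1)).abs()
    return dists.min(dim=1).values.mean()

@torch.no_grad()
def hard_compress_(weights: torch.Tensor, centroids: torch.Tensor) -> None:
    """Hard compressor: snap every weight to its nearest centroid, in place."""
    dists = (weights.reshape(-1, 1) - centroids.reshape(1, -1)).abs()
    idx = dists.argmin(dim=1)
    weights.copy_(centroids[idx].reshape(weights.shape))

# Hypothetical use inside a training step:
#   loss = task_loss + lambda_reg * sum(
#       nearest_centroid_penalty(p, centroids) for p in model.parameters())
#   if step % hard_compress_period == 0:
#       for p in model.parameters():
#           hard_compress_(p, centroids)
```

In this sketch the soft penalty shapes the weight distribution throughout training, while the occasional hard snap exposes the network to the actual quantized weights it will run with on the accelerator.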
