Title

Low Latency Conversion of Artificial Neural Network Models to Rate-encoded Spiking Neural Networks

Authors

Zhanglu Yan, Jun Zhou, Weng-Fai Wong

Abstract

Spiking neural networks (SNNs) are well suited for resource-constrained applications as they do not need expensive multipliers. In a typical rate-encoded SNN, a series of binary spikes within a globally fixed time window is used to fire the neurons. The maximum number of spikes in this time window is also the latency of the network in performing a single inference, and it determines the overall energy efficiency of the model. The aim of this paper is to reduce this latency while maintaining accuracy when converting ANNs to their equivalent SNNs. State-of-the-art conversion schemes yield SNNs whose accuracies are comparable with those of ANNs only for large window sizes. In this paper, we start by analyzing the information lost when converting pre-existing ANN models to standard rate-encoded SNN models. From these insights, we propose a suite of novel techniques that together mitigate the information lost in the conversion and achieve state-of-the-art SNN accuracies along with very low latency. Our method achieved a Top-1 SNN accuracy of 98.73% (1 time step) on the MNIST dataset, 76.38% (8 time steps) on the CIFAR-100 dataset, and 93.71% (8 time steps) on the CIFAR-10 dataset. On ImageNet, an SNN accuracy of 75.35%/79.16% was achieved with 100/200 time steps.
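To illustrate why the time window size trades latency against accuracy in a rate-encoded SNN, the sketch below simulates a single integrate-and-fire neuron with a soft reset driven by a constant input (a stand-in for a ReLU activation from the source ANN). This is a generic illustration of rate coding and ANN-to-SNN quantization error, not the specific conversion techniques proposed in the paper; the function name and parameters are illustrative.

```python
def if_neuron_rate_code(x, T=8, v_thresh=1.0):
    """Rate-code a non-negative ANN activation x over a time window of T steps.

    The neuron integrates x at every time step and emits a binary spike
    whenever its membrane potential crosses v_thresh, subtracting the
    threshold (soft reset) so residual charge carries over. The spike
    count divided by T approximates x, with resolution v_thresh / T.
    """
    v = 0.0        # membrane potential
    spikes = 0     # total binary spikes emitted in the window
    for _ in range(T):
        v += x                 # integrate the input current
        if v >= v_thresh:
            spikes += 1        # fire a binary spike
            v -= v_thresh      # soft reset keeps the residual potential
    return spikes / T * v_thresh

# With T = 8 the neuron can only represent activations in steps of 1/8,
# so an activation of 0.3 is approximated as 0.25; enlarging T reduces
# this quantization error at the cost of proportionally higher latency.
print(if_neuron_rate_code(0.3, T=8))   # coarse approximation of 0.3
print(if_neuron_rate_code(0.3, T=64))  # finer approximation of 0.3
```

The spike count is also the work done per inference, which is why shrinking T (down to 8 steps on CIFAR, or a single step on MNIST, in the paper's results) directly improves energy efficiency.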
