Paper Title

Voltage-Dependent Synaptic Plasticity (VDSP): Unsupervised probabilistic Hebbian plasticity rule based on neurons' membrane potential

Authors

Garg, Nikhil, Balafrej, Ismael, Stewart, Terrence C., Portal, Jean Michel, Bocquet, Marc, Querlioz, Damien, Drouin, Dominique, Rouat, Jean, Beilliard, Yann, Alibart, Fabien

Abstract

This study proposes voltage-dependent synaptic plasticity (VDSP), a novel brain-inspired unsupervised local learning rule for the online implementation of Hebb's plasticity mechanism on neuromorphic hardware. The proposed VDSP learning rule updates the synaptic conductance only on the spike of the postsynaptic neuron, which reduces the number of updates by a factor of two with respect to standard spike-timing-dependent plasticity (STDP). The update depends on the membrane potential of the presynaptic neuron, which is readily available as part of the neuron implementation and hence requires no additional memory for storage. Moreover, the update is regularized on the synaptic weight, preventing weights from exploding or vanishing under repeated stimulation. A rigorous mathematical analysis is performed to draw an equivalence between VDSP and STDP. To validate the system-level performance of VDSP, we train a single-layer spiking neural network (SNN) for the recognition of handwritten digits. We report 85.01 $\pm$ 0.76% (mean $\pm$ s.d.) accuracy for a network of 100 output neurons on the MNIST dataset. The performance improves when scaling the network size (89.93 $\pm$ 0.41% for 400 output neurons, 90.56 $\pm$ 0.27% for 500 neurons), which validates the applicability of the proposed learning rule to spatial pattern recognition tasks. Future work will consider more complex tasks. Interestingly, the learning rule adapts better than STDP to the frequency of the input signal and does not require hand-tuning of hyperparameters.
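The abstract describes the essential mechanics of VDSP: the weight update fires only on a postsynaptic spike, its sign is read from the presynaptic membrane potential (a depolarized presynaptic neuron signals Hebbian correlation), and its magnitude is soft-bounded by the current weight. The sketch below illustrates this scheme under assumed conventions; the parameter names (`ETA`, `V_TH`), the normalized potential range, and the exact update form are illustrative choices, not the paper's equations.

```python
import numpy as np

# Hypothetical parameters (not taken from the paper): learning rate and
# a membrane-potential threshold separating potentiation from depression.
ETA = 0.01
V_TH = 0.5  # assumed normalized presynaptic potential in [0, 1]

def vdsp_update(w, v_pre):
    """VDSP-style update, applied only when the postsynaptic neuron spikes.

    w     : array of synaptic weights, assumed bounded in [0, 1]
    v_pre : array of presynaptic membrane potentials (normalized)

    A high presynaptic potential suggests the presynaptic neuron was
    recently active (Hebbian correlation) -> potentiate; a low potential
    suggests anti-correlation -> depress. Scaling each update by the
    distance to the weight bound regularizes the weights, so repeated
    stimulation cannot make them explode or vanish.
    """
    potentiate = v_pre >= V_TH
    dw = np.where(potentiate,
                  ETA * (1.0 - w),   # soft-bounded potentiation
                  -ETA * w)          # soft-bounded depression
    return np.clip(w + dw, 0.0, 1.0)

# Example: two synapses; only the first presynaptic neuron is depolarized,
# so its weight is potentiated while the other is depressed.
w = np.array([0.5, 0.5])
w_new = vdsp_update(w, v_pre=np.array([0.9, 0.1]))
```

Because the update is a pure function of locally available quantities (the synapse's own weight and the presynaptic potential), it needs no per-synapse spike-timing traces, which is the memory saving the abstract highlights relative to STDP.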
