Paper Title


Making a Spiking Net Work: Robust brain-like unsupervised machine learning

Paper Authors

Stratton, Peter G., Wabnitz, Andrew, Essam, Chip, Cheung, Allen, Hamilton, Tara J.

Abstract


The surge in interest in Artificial Intelligence (AI) over the past decade has been driven almost exclusively by advances in Artificial Neural Networks (ANNs). While ANNs set state-of-the-art performance for many previously intractable problems, the use of global gradient descent necessitates large datasets and computational resources for training, potentially limiting their scalability for real-world domains. Spiking Neural Networks (SNNs) are an alternative to ANNs that use more brain-like artificial neurons and can use local unsupervised learning to rapidly discover sparse recognizable features in the input data. SNNs, however, struggle with dynamical stability and have failed to match the accuracy of ANNs. Here we show how an SNN can overcome many of the shortcomings that have been identified in the literature, including offering a principled solution to the dynamical "vanishing spike problem", to outperform all existing shallow SNNs and equal the performance of an ANN. It accomplishes this while using unsupervised learning with unlabeled data and only 1/50th of the training epochs (labeled data is used only for a simple linear readout layer). This result makes SNNs a viable new method for fast, accurate, efficient, explainable, and re-deployable machine learning with unlabeled data.
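The spiking neurons the abstract contrasts with ANN units are commonly modeled as leaky integrate-and-fire (LIF) units, which integrate input over time and emit a discrete spike on crossing a threshold. The following is a minimal illustrative sketch of that dynamic, not the specific neuron model or parameters used in the paper; all names and values here are assumptions for illustration.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Illustrative only: parameters (v_rest, v_thresh, tau) are
# assumed values, not those from the paper.

def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0,
                 tau=10.0, dt=1.0):
    """Return the spike times produced by a sequence of input currents."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: membrane potential decays toward rest
        # while being driven by the input current.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_thresh:      # threshold crossing: emit a spike...
            spikes.append(t)
            v = v_rest         # ...and reset the membrane potential
    return spikes

# A constant suprathreshold input produces regular spiking.
spikes = simulate_lif([1.5] * 100)
print(spikes)
```

Because the neuron communicates only through sparse discrete spikes rather than continuous activations, learning rules can be local (e.g. spike-timing-dependent), which is what allows the unsupervised feature discovery the abstract describes; the "vanishing spike problem" it mentions arises when such thresholded units in deeper layers stop spiking altogether.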
