Paper Title
ResNets, NeuralODEs and CT-RNNs are Particular Neural Regulatory Networks
Paper Authors
Paper Abstract
This paper shows that ResNets, NeuralODEs, and CT-RNNs are particular neural regulatory networks (NRNs), a biophysical model for the nonspiking neurons encountered in small species, such as the C. elegans nematode, and in the retina of large species. Compared to ResNets, NeuralODEs, and CT-RNNs, NRNs have an additional multiplicative term in their synaptic computation, allowing them to adapt to each particular input. This additional flexibility makes NRNs $M$ times more succinct than NeuralODEs and CT-RNNs, where $M$ is proportional to the size of the training set. Moreover, as NeuralODEs and CT-RNNs are $N$ times more succinct than ResNets, where $N$ is the number of integration steps required to compute the output $F(x)$ for a given input $x$, NRNs are in total $M\,{\cdot}\,N$ times more succinct than ResNets. For a given approximation task, this considerable succinctness makes it possible to learn a very small and therefore understandable NRN, whose behavior can be explained in terms of well-established architectural motifs that NRNs share with gene regulatory networks, such as activation, inhibition, sequentialization, mutual exclusion, and synchronization. To the best of our knowledge, this paper is the first to quantitatively unify the mainstream work on deep neural networks with that in biology and neuroscience.
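To make the contrast concrete, here is a minimal sketch assuming a standard conductance-based form for nonspiking neurons with reversal potentials $E_{ij}$; the abstract does not give the paper's exact equations, so the function names `ct_rnn_step` and `nrn_step` and the parameters `g`, `E`, `tau` are illustrative, not the authors' formulation. The CT-RNN synaptic drive $W\sigma(x)$ is purely additive, whereas the NRN synaptic current $g_{ij}\,\sigma(v_j)\,(E_{ij}-v_i)$ carries the multiplicative term $(E_{ij}-v_i)$, which couples the synapse to the postsynaptic state and thus adapts to each particular input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ct_rnn_step(x, W, tau, I, dt):
    # CT-RNN dynamics: dx/dt = -x/tau + W @ sigmoid(x) + I
    # The synaptic drive W @ sigmoid(x) is additive: it does not
    # depend on the state of the postsynaptic neuron.
    dx = -x / tau + W @ sigmoid(x) + I
    return x + dt * dx

def nrn_step(v, g, E, tau, I, dt):
    # Assumed conductance-based NRN dynamics (one common form):
    #   dv_i/dt = -v_i/tau + sum_j g_ij * sigmoid(v_j) * (E_ij - v_i) + I_i
    # The factor (E_ij - v_i) is the extra multiplicative term: the
    # synaptic current scales with the gap to the reversal potential,
    # so the effective weight varies with the postsynaptic state.
    syn = np.sum(g * sigmoid(v)[None, :] * (E - v[:, None]), axis=1)
    dv = -v / tau + syn + I
    return v + dt * dv

# Illustrative usage: unrolling 'steps' Euler iterations is the sense in
# which a depth-N ResNet corresponds to N integration steps of the ODE.
rng = np.random.default_rng(0)
n, dt, steps = 4, 0.1, 100               # 'steps' plays the role of N
x = v = rng.standard_normal(n)
W = 0.1 * rng.standard_normal((n, n))
g = 0.1 * rng.random((n, n))             # nonnegative conductances
E = rng.choice([-1.0, 1.0], (n, n))      # excitatory/inhibitory reversals
I = rng.standard_normal(n)
for _ in range(steps):
    x = ct_rnn_step(x, W, tau=1.0, I=I, dt=dt)
    v = nrn_step(v, g, E, tau=1.0, I=I, dt=dt)
```

Note that in this sketch the NRN uses the same number of integration steps; the succinctness claimed in the abstract concerns the number of neurons and parameters needed for a given approximation task, not the cost of a single forward pass.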