Paper Title
Low-Dimensional Manifolds Support Multiplexed Integrations in Recurrent Neural Networks
Paper Authors
Paper Abstract
We study the learning dynamics and the representations emerging in recurrent neural networks (RNNs) trained to integrate one or multiple temporal signals. Combining analytical and numerical investigations, we characterize the conditions under which an RNN with n neurons learns to integrate D(n) scalar signals of arbitrary duration. We show, for both linear and ReLU neurons, that its internal state lives close to a D-dimensional manifold, whose shape is related to the activation function. Each neuron therefore carries, to various degrees, information about the value of all integrals. We discuss the deep analogy between our results and the concept of mixed selectivity forged by computational neuroscientists to interpret cortical recordings.
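To make the integration task concrete, below is a minimal NumPy sketch of a hand-constructed linear RNN with n = 8 neurons that integrates D = 2 scalar signals; its hidden state remains confined to a D-dimensional linear subspace, in the spirit of the low-dimensional manifolds described above. This is an illustrative assumption of one possible integrator, not the trained solution analyzed in the paper, and all symbols (M, W, U, V) are hypothetical names introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, T = 8, 2, 50                       # neurons, signals to integrate, time steps

# Columns of M span the D-dimensional subspace the hidden state stays on.
M, _ = np.linalg.qr(rng.standard_normal((n, D)))
W = M @ M.T        # recurrent weights: identity on the subspace, zero elsewhere
U = M              # input weights
V = M.T            # linear readout

x = rng.standard_normal((T, D))          # D scalar input signals
h = np.zeros(n)
outputs = []
for t in range(T):
    h = W @ h + U @ x[t]                 # linear recurrent update
    outputs.append(V @ h)                # readout of the D running integrals

outputs = np.array(outputs)
# The readout reproduces the running sums of the inputs for arbitrary duration T.
assert np.allclose(outputs, np.cumsum(x, axis=0))
```

Because W acts as the identity on the column space of M and annihilates its orthogonal complement, the hidden state started at zero never leaves that D-dimensional subspace, which is the linear analogue of the manifold picture summarized in the abstract.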