Paper Title
The Neural Race Reduction: Dynamics of Abstraction in Gated Networks
Paper Authors
Paper Abstract
Our theoretical understanding of deep learning has not kept pace with its empirical success. While network architecture is known to be critical, we do not yet understand its effect on learned representations and network behavior, or how this architecture should reflect task structure. In this work, we begin to address this gap by introducing the Gated Deep Linear Network framework that schematizes how pathways of information flow impact learning dynamics within an architecture. Crucially, because of the gating, these networks can compute nonlinear functions of their input. We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning. Our analysis demonstrates that the learning dynamics in structured networks can be conceptualized as a neural race with an implicit bias towards shared representations, which then govern the model's ability to systematically generalize, multi-task, and transfer. We validate our key insights on naturalistic datasets and with relaxed assumptions. Taken together, our work gives rise to general hypotheses relating neural architecture to learning and provides a mathematical approach towards understanding the design of more complex architectures and the role of modularity and compositionality in solving real-world problems. The code and results are available at https://www.saxelab.org/gated-dln.
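To make the framework concrete, below is a minimal sketch (not the authors' released implementation) of a gated deep linear network forward pass. Each pathway is a product of weight matrices with no elementwise nonlinearity, and multiplicative gates select which pathways route information; as a result, the overall input-output map is nonlinear in the input and gating context even though every pathway is linear. The dimensions, two-pathway structure, and gate values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: input, hidden, output, and number of pathways.
d_in, d_hid, d_out, n_paths = 4, 8, 3, 2

# Each pathway is a deep *linear* chain W2 @ W1 (no elementwise nonlinearity).
W1 = [rng.standard_normal((d_hid, d_in)) * 0.1 for _ in range(n_paths)]
W2 = [rng.standard_normal((d_out, d_hid)) * 0.1 for _ in range(n_paths)]

def forward(x, gates):
    """Gated deep linear network: a gated sum of linear pathways.

    Every pathway computes a linear map of x, but the gates decide
    which pathways are active, so the map from (x, gates) to the
    output is nonlinear overall.
    """
    return sum(g * (W2[p] @ (W1[p] @ x)) for p, g in enumerate(gates))

x = rng.standard_normal(d_in)
print(forward(x, gates=[1.0, 0.0]))  # only pathway 0 carries information
print(forward(x, gates=[0.0, 1.0]))  # only pathway 1 carries information
print(forward(x, gates=[1.0, 1.0]))  # both pathways contribute
```

Under the paper's reduction, gradient descent on architectures of this form behaves like a race between pathways, and the pathway that learns shared structure fastest comes to dominate, which is the implicit bias the abstract refers to.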