Paper title
Origami in N dimensions: How feed-forward networks manufacture linear separability
Paper authors
Paper abstract
Neural networks can implement arbitrary functions. But, mechanistically, what are the tools at their disposal to construct the target? For classification tasks, the network must transform the data classes into a linearly separable representation in the final hidden layer. We show that a feed-forward architecture has one primary tool at hand to achieve this separability: progressive folding of the data manifold in unoccupied higher dimensions. The operation of folding provides a useful intuition in low dimensions that generalizes to high ones. We argue that an alternative method based on shear, requiring very deep architectures, plays only a small role in real-world networks. The folding operation, however, is powerful as long as layers are wider than the data dimensionality, allowing efficient solutions by providing access to arbitrary regions in the distribution, such as data points of one class forming islands within the other classes. We argue that a link exists between the universal approximation property in ReLU networks and the fold-and-cut theorem (Demaine et al., 1998) dealing with physical paper folding. Based on the mechanistic insight, we predict that the progressive generation of separability is necessarily accompanied by neurons showing mixed selectivity and bimodal tuning curves. This is validated in a network trained on the poker hand task, showing the emergence of bimodal tuning curves during training. We hope that our intuitive picture of the data transformation in deep networks can help to provide interpretability, and discuss possible applications to the theory of convolutional networks, loss landscapes, and generalization. TL;DR: Shows that the internal processing of deep networks can be thought of as literal folding operations on the data distribution in the N-dimensional activation space. A link to a well-known theorem in origami theory is provided.
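The folding intuition described in the abstract can be illustrated with a minimal sketch (not taken from the paper; the toy 1-D dataset, the hand-picked weights, and the readout threshold are assumptions chosen for illustration). A single ReLU layer that is wider than the data dimensionality folds the input line at the origin, so that an "island" of one class surrounded by another class becomes linearly separable for a plain linear readout:

# Minimal sketch (illustrative, not the paper's code): one ReLU layer
# "folding" a 1-D data distribution into a 2-D activation space.

import numpy as np

rng = np.random.default_rng(0)

# Class 0: an "island" around the origin; class 1: two flanking segments.
# Not linearly separable on the original 1-D line.
x0 = rng.uniform(-0.5, 0.5, size=200)
x1 = np.concatenate([rng.uniform(1.0, 2.0, 100),
                     rng.uniform(-2.0, -1.0, 100)])
x = np.concatenate([x0, x1])[:, None]            # shape (400, 1)
y = np.concatenate([np.zeros(200), np.ones(200)])

# Hidden layer wider than the data dimensionality (2 > 1).
# h = ReLU(x W^T + b) folds the line at x = 0: the two half-lines land
# on the two coordinate axes of the activation space.
W = np.array([[1.0], [-1.0]])                    # hand-picked fold weights
b = np.zeros(2)
h = np.maximum(0.0, x @ W.T + b)                 # shape (400, 2)

# After the fold, |x| = h[:, 0] + h[:, 1], so a single linear readout
# separates the island (|x| < 0.5) from the flanks (|x| > 1).
readout_w = np.array([1.0, 1.0])
readout_b = -0.75
pred = (h @ readout_w + readout_b > 0).astype(float)

print("accuracy after one fold:", (pred == y).mean())   # prints 1.0

The weights W = [[1], [-1]] realize the fold x -> (ReLU(x), ReLU(-x)); a trained network would have to discover an equivalent fold by gradient descent, and deeper networks would compose several such folds to carve out more complex class regions.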