Paper Title
Batch Normalization Explained
Paper Authors
Paper Abstract
A critically important, ubiquitous, and yet poorly understood ingredient in modern deep networks (DNs) is batch normalization (BN), which centers and normalizes the feature maps. To date, only limited progress has been made in understanding why BN boosts DN learning and inference performance; work has focused exclusively on showing that BN smooths a DN's loss landscape. In this paper, we study BN theoretically from the perspective of function approximation; we exploit the fact that most of today's state-of-the-art DNs are continuous piecewise affine (CPA) splines that fit a predictor to the training data via affine mappings defined over a partition of the input space (the so-called "linear regions"). {\em We demonstrate that BN is an unsupervised learning technique that -- independent of the DN's weights or gradient-based learning -- adapts the geometry of a DN's spline partition to match the data.} BN provides a "smart initialization" that boosts the performance of DN learning, because it adapts even a DN initialized with random weights to align its spline partition with the data. We also show that the variation of BN statistics between mini-batches introduces a dropout-like random perturbation to the partition boundaries and hence the decision boundary for classification problems. This per-mini-batch perturbation reduces overfitting and improves generalization by increasing the margin between the training samples and the decision boundary.
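For reference, below is a minimal NumPy sketch of the centering-and-normalizing operation the abstract refers to (the function name, shapes, and values are illustrative, not taken from the paper). It standardizes each feature using the current mini-batch's mean and variance, which is why those statistics, and hence the normalization, vary from one mini-batch to the next.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Minimal batch normalization over one mini-batch.

    x: array of shape (batch, features) -- pre-activations of one layer.
    gamma, beta: learnable per-feature scale and shift, shape (features,).
    """
    # Per-feature mini-batch statistics; these change between mini-batches,
    # which is the source of the dropout-like perturbation described above.
    mu = x.mean(axis=0)
    var = x.var(axis=0)

    # Center and normalize each feature, then rescale and shift.
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Usage: pass a random mini-batch through BN and check the per-feature statistics.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(32, 8))  # 32 samples, 8 features
out = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(out.mean(axis=0).round(3))  # approximately 0 per feature
print(out.std(axis=0).round(3))   # approximately 1 per feature
```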