Paper Title
MaxUp: A Simple Way to Improve Generalization of Neural Network Training
Paper Authors
Paper Abstract
We propose \emph{MaxUp}, an embarrassingly simple yet highly effective technique for improving the generalization performance of machine learning models, especially deep neural networks. The idea is to generate a set of augmented copies of each data point with random perturbations or transforms, and to minimize the maximum, or worst-case, loss over the augmented data. By doing so, we implicitly introduce a smoothness or robustness regularization against the random perturbations, and hence improve the generalization performance. For example, in the case of Gaussian perturbation, \emph{MaxUp} is asymptotically equivalent to using the gradient norm of the loss as a penalty to encourage smoothness. We test \emph{MaxUp} on a range of tasks, including image classification, language modeling, and adversarial certification, on which \emph{MaxUp} consistently outperforms the existing best baseline methods without introducing substantial computational overhead. In particular, on ImageNet classification we improve the state-of-the-art top-1 accuracy obtained without extra data from $85.5\%$ to $85.8\%$. Code will be released soon.
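As a rough illustration of the worst-case objective described above, the abstract suggests training on $\min_\theta \mathbb{E}_{x,y}\big[\max_{i=1,\dots,m} \ell(f(x+\delta_i;\theta), y)\big]$, where the $\delta_i$ are random perturbations of the input. The PyTorch-style sketch below computes one such MaxUp loss under Gaussian perturbation; the function name `maxup_loss` and the hyperparameter values `m` and `sigma` are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def maxup_loss(model, x, y, m=4, sigma=0.1):
    """Minimal MaxUp sketch: draw m Gaussian-perturbed copies of each input
    and return the mean over the batch of the worst-case (maximum) loss
    across the m copies. `m` and `sigma` are illustrative choices."""
    batch, *rest = x.shape
    # Replicate each example m times and add Gaussian perturbations.
    x_aug = x.unsqueeze(1).expand(batch, m, *rest).reshape(batch * m, *rest)
    x_aug = x_aug + sigma * torch.randn_like(x_aug)
    y_aug = y.unsqueeze(1).expand(batch, m).reshape(batch * m)
    # Per-example losses, reshaped to (batch, m); take the max over the m copies.
    losses = F.cross_entropy(model(x_aug), y_aug, reduction="none").view(batch, m)
    return losses.max(dim=1).values.mean()
```

In a training loop this would simply replace the usual cross-entropy call (`loss = maxup_loss(model, x, y); loss.backward()`); with `m = 1` it reduces to standard training on perturbed data, which is one way to sanity-check the implementation.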