Paper Title
Feature-level augmentation to improve robustness of deep neural networks to affine transformations
Paper Authors
Paper Abstract
Recent studies revealed that convolutional neural networks do not generalize well to small image transformations, e.g. rotations by a few degrees or translations of a few pixels. To improve the robustness to such transformations, we propose to introduce data augmentation at intermediate layers of the neural architecture, in addition to the common data augmentation applied on the input images. By introducing small perturbations to activation maps (features) at various levels, we develop the capacity of the neural network to cope with such transformations. We conduct experiments on three image classification benchmarks (Tiny ImageNet, Caltech-256 and Food-101), considering two different convolutional architectures (ResNet-18 and DenseNet-121). When compared with two state-of-the-art stabilization methods, the empirical results show that our approach consistently attains the best trade-off between accuracy and mean flip rate.
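The abstract's central idea is to perturb activation maps at intermediate layers, not just the input images. A minimal NumPy sketch of one plausible feature-level perturbation is shown below; the function name `augment_features`, the shift-plus-noise scheme, and all parameter values are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def augment_features(features, max_shift=1, noise_std=0.01, rng=None):
    """Apply a small random spatial shift and additive Gaussian noise to an
    activation map of shape (C, H, W).

    Hypothetical sketch of feature-level augmentation: during training, such a
    perturbation could be applied between convolutional blocks so the network
    learns to tolerate small misalignments of its intermediate features.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Small circular shift along the spatial axes (H, W).
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(features, shift=(dy, dx), axis=(1, 2))
    # Low-magnitude additive noise on the shifted activations.
    return shifted + rng.normal(0.0, noise_std, size=features.shape)
```

In a training loop, such a function would be invoked only during training (disabled at inference), analogously to dropout, e.g. after selected stages of ResNet-18 or DenseNet-121.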