Paper Title
SelectScale: Mining More Patterns from Images via Selective and Soft Dropout
Paper Authors
Paper Abstract
Convolutional neural networks (CNNs) have achieved remarkable success in image recognition. Although CNNs effectively learn the internal patterns of the input images, these patterns constitute only a small proportion of the useful patterns the images contain. This can be attributed to the fact that CNNs stop learning once the patterns already learned suffice for correct classification. Network regularization methods such as dropout and SpatialDropout can alleviate this problem: during training, they randomly drop features. In essence, these dropout methods change the patterns learned by the network and, in turn, force it to learn additional patterns to make the correct classification. However, the above methods have an important drawback: randomly dropping features is generally inefficient and can introduce unnecessary noise. To tackle this problem, we propose SelectScale. Instead of randomly dropping units, SelectScale selects the important features in the network and adjusts them during training. Using SelectScale, we improve the performance of CNNs on CIFAR and ImageNet.
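To make the contrast with random dropout concrete, below is a minimal PyTorch sketch of the idea as described in the abstract: rather than zeroing random units, the layer identifies the currently most important feature channels and softly scales them down, nudging the network to exploit other patterns. The selection criterion (mean absolute channel activation), the selection ratio, the scale factor, and all names here are our assumptions for illustration, not the paper's actual specification.

```python
import torch
import torch.nn as nn

class SelectiveSoftScale(nn.Module):
    """Hypothetical sketch of the SelectScale idea.

    During training, pick the most "important" feature channels and
    attenuate them (soft dropout) instead of zeroing random units.
    The importance proxy and scale value are assumptions, not the
    method defined in the paper.
    """

    def __init__(self, select_ratio=0.1, scale=0.5):
        super().__init__()
        self.select_ratio = select_ratio  # fraction of channels treated as important
        self.scale = scale                # soft attenuation instead of a hard drop

    def forward(self, x):                 # x: (N, C, H, W)
        if not self.training:
            return x                      # identity at inference, like dropout
        # Importance proxy: mean absolute activation per channel.
        importance = x.abs().mean(dim=(0, 2, 3))          # shape (C,)
        k = max(1, int(self.select_ratio * x.size(1)))
        top_idx = importance.topk(k).indices
        mask = torch.ones(x.size(1), device=x.device)
        mask[top_idx] = self.scale        # attenuate, rather than zero, the top channels
        return x * mask.view(1, -1, 1, 1)
```

Under these assumptions, the layer would be inserted between convolutional blocks where one would otherwise place SpatialDropout; like dropout, it is active only in training mode and reduces to the identity at inference.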