Paper Title
Learning Sparse Filters in Deep Convolutional Neural Networks with a l1/l2 Pseudo-Norm
Paper Authors
Paper Abstract
While deep neural networks (DNNs) have proven to be efficient for numerous tasks, they come at a high memory and computation cost, thus making them impractical on resource-limited devices. However, these networks are known to contain a large number of parameters, and recent research has shown that their structure can be made more compact without compromising their performance. In this paper, we present a sparsity-inducing regularization term based on the ratio l1/l2 pseudo-norm defined on the filter coefficients. By defining this pseudo-norm appropriately for the different filter kernels, and removing irrelevant filters, the number of kernels in each layer can be drastically reduced, leading to very compact Deep Convolutional Neural Network (DCNN) structures. Unlike numerous existing methods, our approach does not require an iterative retraining process; using this regularization term, it directly produces a sparse model during training. Furthermore, our approach is also much simpler to implement than existing methods. Experimental results on MNIST and CIFAR-10 show that our approach significantly reduces the number of filters of classical models such as LeNet and VGG while reaching the same or even better accuracy than the baseline models. Moreover, the trade-off between sparsity and accuracy is compared to that of other loss regularization terms based on the l1 or l2 norm, as well as to the SSL, NISP and GAL methods, showing that our approach outperforms them.
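As a rough illustration of how such a regularization term could be added to a training loss, the sketch below computes an l1/l2 ratio per output filter of every convolutional layer and sums the results. This is only a minimal PyTorch sketch under assumptions the abstract does not specify: the exact per-kernel definition of the pseudo-norm, the function names `l1_l2_pseudo_norm` and `sparsity_regularizer`, and the balancing weight `lambda_reg` are all illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

def l1_l2_pseudo_norm(conv_weight, eps=1e-8):
    """l1/l2 ratio per filter of a convolution layer (illustrative definition).

    conv_weight has shape (out_channels, in_channels, kH, kW); each output
    filter is flattened, and its l1 norm is divided by its l2 norm, so the
    term is small when the filter has few dominant coefficients.
    """
    flat = conv_weight.view(conv_weight.size(0), -1)        # one row per filter
    l1 = flat.abs().sum(dim=1)                               # l1 norm of each filter
    l2 = flat.pow(2).sum(dim=1).sqrt().clamp_min(eps)        # l2 norm, guarded against zero
    return (l1 / l2).sum()                                   # sum of ratios over filters

def sparsity_regularizer(model):
    """Sum the l1/l2 pseudo-norm over all convolutional layers of a model."""
    reg = 0.0
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            reg = reg + l1_l2_pseudo_norm(module.weight)
    return reg

# Hypothetical usage inside a standard training step:
#   loss = criterion(model(inputs), targets) + lambda_reg * sparsity_regularizer(model)
# where lambda_reg controls the trade-off between accuracy and filter sparsity.
```

Because the regularizer is differentiable almost everywhere, it can be minimized jointly with the task loss during training, which is consistent with the abstract's claim that no iterative retraining is required; filters whose coefficients are driven toward zero can then be pruned to obtain the compact structure.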