Paper Title

A pruning method based on the dissimilarity of angle among channels and filters

Paper Authors

Jiayi Yao, Ping Li, Xiatao Kang, Yuzhe Wang

Paper Abstract

Convolutional Neural Networks (CNNs) are used more and more widely in various fields, and their computation and memory demands are increasing significantly. To make them applicable under limited conditions, such as embedded applications, network compression has emerged, and among its techniques researchers pay particular attention to network pruning. In this paper, we encode the convolutional network to obtain the similarity of different encoding nodes, and evaluate the connectivity-power among convolutional kernels on the basis of this similarity. We then impose different levels of penalty according to the different connectivity-power. Meanwhile, we propose Channel Pruning based on the Dissimilarity of Angle (DACP). Firstly, we train a sparse model with the GL penalty and impose an angle-dissimilarity constraint on the channels and filters of the convolutional network to obtain a sparser structure. Eventually, the effectiveness of our method is demonstrated in the experiments section. On CIFAR-10, we reduce FLOPs on VGG-16 by 66.86% with 93.31% accuracy after pruning, where FLOPs denotes the number of floating-point operations of the model. Moreover, on ResNet-32, we reduce FLOPs by 58.46%, which brings the accuracy after pruning to 91.76%.
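
To make the quantities in the abstract concrete, below is a minimal NumPy sketch of (a) a filter-wise group-sparsity penalty, assuming the abstract's "GL penalty" refers to the standard Group Lasso, and (b) a pairwise angle-dissimilarity matrix between flattened filters, the kind of redundancy measure a DACP-style criterion could rank filters by. The function names and the toy usage are illustrative assumptions, not the authors' released implementation.

import numpy as np

def group_lasso_penalty(weights: np.ndarray) -> float:
    """Group Lasso over filters: sum of the L2 norms of each filter.

    weights: conv-layer weights of shape (out_channels, in_channels, k, k).
    Each output filter is one group, so the penalty drives whole filters
    toward zero rather than individual weights.
    """
    flat = weights.reshape(weights.shape[0], -1)      # one row per filter
    return float(np.linalg.norm(flat, axis=1).sum())  # sum of per-filter L2 norms

def angle_dissimilarity(weights: np.ndarray) -> np.ndarray:
    """Pairwise angles (radians) between flattened filters.

    A small angle means two filters point in nearly the same direction,
    i.e. they are redundant; an angle-based pruning criterion would
    favor removing one filter of such a pair.
    """
    flat = weights.reshape(weights.shape[0], -1)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.clip(norms, 1e-12, None)         # unit-normalize each filter
    cos = np.clip(unit @ unit.T, -1.0, 1.0)           # cosine-similarity matrix
    return np.arccos(cos)                             # angles in [0, pi]

# Toy usage: 8 random filters of shape 3x3x3 (hypothetical layer sizes).
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3))
print("GL penalty:", group_lasso_penalty(w))
print("min off-diagonal angle:",
      np.min(angle_dissimilarity(w) + np.eye(8) * np.pi))

Small off-diagonal angles flag near-parallel filter pairs, i.e. redundant directions; combined with the GL penalty, which pushes entire filters toward zero norm, such pairs are natural pruning candidates.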
