Title

Learning to Prune Deep Neural Networks via Reinforcement Learning

Authors

Manas Gupta, Siddharth Aravindan, Aleksandra Kalisz, Vijay Chandrasekhar, Lin Jie

Abstract

This paper proposes PuRL - a deep reinforcement learning (RL) based algorithm for pruning neural networks. Unlike current RL based model compression approaches where feedback is given only at the end of each episode to the agent, PuRL provides rewards at every pruning step. This enables PuRL to achieve sparsity and accuracy comparable to current state-of-the-art methods, while having a much shorter training cycle. PuRL achieves more than 80% sparsity on the ResNet-50 model while retaining a Top-1 accuracy of 75.37% on the ImageNet dataset. Through our experiments we show that PuRL is also able to sparsify already efficient architectures like MobileNet-V2. In addition to performance characterisation experiments, we also provide a discussion and analysis of the various RL design choices that went into the tuning of the Markov Decision Process underlying PuRL. Lastly, we point out that PuRL is simple to use and can be easily adapted for various architectures.
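To make the per-step reward idea concrete, here is a minimal sketch of an RL-style pruning loop that emits a reward after every pruning action rather than only at the end of the episode. The magnitude-pruning rule, the fixed `action` standing in for the agent's policy, the `alpha` weighting, and the tiny model/data are all illustrative assumptions, not PuRL's actual design.

```python
import torch
import torch.nn as nn

def magnitude_prune(layer: nn.Linear, ratio: float) -> None:
    # Zero out the smallest-magnitude weights (an assumed pruning rule).
    w = layer.weight.data
    k = int(w.numel() * ratio)
    if k > 0:
        threshold = w.abs().flatten().kthvalue(k).values
        w[w.abs() <= threshold] = 0.0

def sparsity(model: nn.Module) -> float:
    # Fraction of all parameters that are exactly zero.
    total = sum(p.numel() for p in model.parameters())
    zeros = sum((p == 0).sum().item() for p in model.parameters())
    return zeros / total

# Tiny stand-ins for ResNet-50 / ImageNet, just to make the loop runnable.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
x, y = torch.randn(64, 16), torch.randint(0, 4, (64,))

alpha = 0.5  # assumed trade-off between sparsity and accuracy in the reward
for step, layer in enumerate(m for m in model if isinstance(m, nn.Linear)):
    action = 0.5  # placeholder: an RL agent's policy would choose this ratio
    magnitude_prune(layer, action)
    # A cheap accuracy proxy, evaluated immediately so a reward exists at
    # *every* pruning step instead of only once per episode.
    with torch.no_grad():
        acc = (model(x).argmax(dim=1) == y).float().mean().item()
    reward = alpha * sparsity(model) + (1 - alpha) * acc
    print(f"step {step}: reward = {reward:.3f}")  # fed back to the agent
```

The dense feedback is the point of contrast with episodic-reward compression methods: the agent can credit each layer-wise pruning decision directly, which is what the abstract credits for the shorter training cycle.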
