Paper Title

Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers

Paper Authors

Junjie Liu, Zhe Xu, Runbin Shi, Ray C. C. Cheung, Hayden K. H. So

Paper Abstract

We present a novel network pruning algorithm called Dynamic Sparse Training that jointly finds the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds. These thresholds are adjusted dynamically, at fine-grained layer-wise granularity, via backpropagation. We demonstrate that Dynamic Sparse Training can easily train very sparse neural network models with little performance loss, using the same number of training epochs as dense models. Compared with other sparse training algorithms, Dynamic Sparse Training achieves state-of-the-art performance on various network architectures. Additionally, we report several surprising observations that provide strong evidence for the effectiveness and efficiency of our algorithm. These observations reveal underlying problems of traditional three-stage pruning algorithms and show how our algorithm can guide the design of more compact network architectures.
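To make the idea of a trainable masked layer concrete, below is a minimal PyTorch sketch, not the authors' implementation: the layer name MaskedLinear, the per-output-unit threshold shape, and the straight-through surrogate gradient for the binary step are all illustrative assumptions. The sketch shows how a layer can mask weights against a learned threshold in the forward pass while both weights and thresholds receive gradients in the backward pass.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryStep(torch.autograd.Function):
    """Unit step used to binarize the mask. The backward pass uses a
    simple straight-through surrogate so the trainable thresholds
    (and weights) still receive gradients through the step."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass gradients only near the pruning boundary; elsewhere the
        # step is flat. (Illustrative choice, not the paper's estimator.)
        return grad_output * (x.abs() <= 1).float()

class MaskedLinear(nn.Module):
    """Hypothetical masked layer: weights whose magnitude falls below a
    trainable threshold are masked out in the forward pass, so pruning
    and weight training happen in one unified optimization loop."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        # One trainable threshold per output unit; broadcasts over the
        # weight rows for fine-grained, layer-wise pruning.
        self.threshold = nn.Parameter(torch.zeros(out_features, 1))

    def forward(self, x):
        mask = BinaryStep.apply(self.weight.abs() - self.threshold)
        return F.linear(x, self.weight * mask, self.bias)

# Example: a masked layer is a drop-in replacement for nn.Linear.
layer = MaskedLinear(784, 256)
out = layer(torch.randn(32, 784))
sparsity = 1.0 - (layer.weight.abs() > layer.threshold).float().mean()
```

The paper additionally regularizes the thresholds so that sparsity is actively encouraged rather than only tolerated; that term, and the paper's specific surrogate gradient, are omitted from this sketch.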
