Title

Unified Visual Transformer Compression

Authors

Shixing Yu, Tianlong Chen, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Liu, Zhangyang Wang

Abstract

Vision transformers (ViTs) have gained popularity recently. Even without customized image operators such as convolutions, ViTs can yield competitive performance when properly trained on massive data. However, the computational overhead of ViTs remains prohibitive, due to stacking multi-head self-attention modules, among other components. Compared to the vast literature on, and prevailing success in, compressing convolutional neural networks, the study of Vision Transformer compression has only just emerged, and existing works focus on one or two aspects of compression. This paper proposes a unified ViT compression framework that seamlessly assembles three effective techniques: pruning, layer skipping, and knowledge distillation. We formulate a budget-constrained, end-to-end optimization framework, targeting jointly learning model weights, layer-wise pruning ratios/masks, and skip configurations, under a distillation loss. The optimization problem is then solved using the primal-dual algorithm. Experiments are conducted with several ViT variants, e.g., DeiT and T2T-ViT backbones on the ImageNet dataset, and our approach consistently outperforms recent competitors. For example, DeiT-Tiny can be trimmed down to 50\% of the original FLOPs almost without losing accuracy. Code is available online:~\url{https://github.com/VITA-Group/UVC}.
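The abstract's core idea, a budget-constrained objective solved by primal-dual updates, can be illustrated with a toy sketch. This is not the paper's actual algorithm: the scalar per-layer gates, the quadratic proxy loss standing in for the distillation loss, and all step sizes are hypothetical assumptions for illustration. The primal variables (gates) descend on the Lagrangian while the dual variable ascends on the FLOPs-budget violation.

```python
# Toy primal-dual optimization for budget-constrained compression.
# Hypothetical setup: gate g[i] in [0, 1] scales how much of layer i is
# kept; (1 - g)^2 is an illustrative proxy for the distillation/task
# loss, which grows as more of the layer is pruned away.

def primal_dual(costs, budget, steps=2000, lr_g=0.05, lr_lam=0.01):
    g = [1.0] * len(costs)  # start from the uncompressed model
    lam = 0.0               # dual variable for the FLOPs constraint
    for _ in range(steps):
        flops = sum(c * gi for c, gi in zip(costs, g))
        # Primal step: gradient descent on proxy_loss + lam * (flops - budget),
        # with gates projected back into [0, 1].
        for i, c in enumerate(costs):
            grad = -2.0 * (1.0 - g[i]) + lam * c
            g[i] = min(1.0, max(0.0, g[i] - lr_g * grad))
        # Dual step: gradient ascent on the constraint violation,
        # projected onto lam >= 0.
        lam = max(0.0, lam + lr_lam * (flops - budget))
    return g, lam

costs = [4.0, 2.0, 1.0]  # illustrative per-layer FLOPs at full width
g, lam = primal_dual(costs, budget=3.5)
flops = sum(c * gi for c, gi in zip(costs, g))
print(round(flops, 2), [round(x, 2) for x in g])
```

Note how the dual variable automatically prices FLOPs: the expensive first layer ends up pruned hardest, while the cheap last layer stays mostly intact, and the final FLOPs settle near the budget. The paper's actual formulation additionally learns the model weights and skip configurations jointly under the same constraint.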
