Paper Title

Three things everyone should know about Vision Transformers

Paper Authors

Hugo Touvron, Matthieu Cord, Alaaeldin El-Nouby, Jakob Verbeek, Hervé Jégou

Paper Abstract

After their initial success in natural language processing, transformer architectures have rapidly gained traction in computer vision, providing state-of-the-art results for tasks such as image classification, detection, segmentation, and video analysis. We offer three insights based on simple and easy-to-implement variants of vision transformers. (1) The residual layers of vision transformers, which are usually processed sequentially, can to some extent be processed efficiently in parallel without noticeably affecting the accuracy. (2) Fine-tuning the weights of the attention layers is sufficient to adapt vision transformers to a higher resolution and to other classification tasks. This saves compute, reduces the peak memory consumption at fine-tuning time, and allows sharing the majority of weights across tasks. (3) Adding MLP-based patch pre-processing layers improves BERT-like self-supervised training based on patch masking. We evaluate the impact of these design choices using the ImageNet-1k dataset, and confirm our findings on the ImageNet-v2 test set. Transfer performance is measured across six smaller datasets.
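
To make insights (1) and (2) more concrete, the following is a minimal PyTorch-style sketch rather than the authors' implementation: `ParallelViTBlock`, `freeze_all_but_attention`, and all module and parameter names are illustrative assumptions. The parallel block sums the outputs of several attention branches, and then several MLP branches, into a single residual update instead of stacking them sequentially; the freezing helper assumes attention parameters contain "attn" in their names.

```python
import torch
import torch.nn as nn


class ParallelViTBlock(nn.Module):
    """Sketch of insight (1): several attention branches, then several MLP
    branches, are applied to the same input and summed into one residual
    update, instead of being stacked one after another."""

    def __init__(self, dim: int, num_heads: int = 6,
                 num_parallel: int = 2, mlp_ratio: int = 4):
        super().__init__()
        self.attn_branches = nn.ModuleList([
            nn.ModuleDict({
                "norm": nn.LayerNorm(dim),
                "attn": nn.MultiheadAttention(dim, num_heads, batch_first=True),
            })
            for _ in range(num_parallel)
        ])
        self.mlp_branches = nn.ModuleList([
            nn.Sequential(
                nn.LayerNorm(dim),
                nn.Linear(dim, mlp_ratio * dim),
                nn.GELU(),
                nn.Linear(mlp_ratio * dim, dim),
            )
            for _ in range(num_parallel)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # All attention branches read the same input, so they have no
        # sequential dependency and their outputs are simply summed.
        attn_out = 0
        for branch in self.attn_branches:
            h = branch["norm"](x)
            attn_out = attn_out + branch["attn"](h, h, h, need_weights=False)[0]
        x = x + attn_out
        # Same idea for the MLP branches.
        mlp_out = 0
        for branch in self.mlp_branches:
            mlp_out = mlp_out + branch(x)
        return x + mlp_out


def freeze_all_but_attention(model: nn.Module) -> None:
    """Sketch of insight (2): fine-tune only the attention weights.
    Assumes attention parameters have 'attn' in their names (true for the
    illustrative module above, not necessarily for other codebases)."""
    for name, param in model.named_parameters():
        param.requires_grad = "attn" in name


# Illustrative usage: a ViT-S-like width of 384 and 197 tokens (196 patches + CLS).
block = ParallelViTBlock(dim=384, num_heads=6, num_parallel=2)
tokens = torch.randn(2, 197, 384)
out = block(tokens)
freeze_all_but_attention(block)
```

The appeal of the parallel form in the abstract is that branches at the same depth have no dependency on each other, so they can be computed concurrently (or fused into wider operations) while keeping the total parameter count comparable to the sequential design.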
