Paper Title
Visual Attention Network
Paper Authors
Paper Abstract
While originally designed for natural language processing tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structure. (2) The quadratic complexity is too expensive for high-resolution images. (3) It captures only spatial adaptability and ignores channel adaptability. In this paper, we propose a novel linear attention named large kernel attention (LKA), which enables the self-adaptive and long-range correlations of self-attention while avoiding its shortcomings. Furthermore, we present a neural network based on LKA, namely the Visual Attention Network (VAN). While extremely simple, VAN surpasses similarly sized vision transformers (ViTs) and convolutional neural networks (CNNs) on various tasks, including image classification, object detection, semantic segmentation, panoptic segmentation, and pose estimation. For example, VAN-B6 achieves 87.8% accuracy on the ImageNet benchmark and sets a new state of the art (58.2 PQ) for panoptic segmentation. Moreover, VAN-B2 surpasses Swin-T by 4% mIoU (50.1 vs. 46.1) for semantic segmentation on the ADE20K benchmark and by 2.6% AP (48.8 vs. 46.2) for object detection on the COCO dataset. It provides a novel method and a simple yet strong baseline for the community. Code is available at https://github.com/Visual-Attention-Network.
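The core idea of large kernel attention can be sketched in a few lines: the paper decomposes a large-kernel convolution into a small depthwise convolution, a depthwise dilated convolution, and a 1×1 channel-mixing convolution, and uses the result to gate the input element-wise, which keeps the cost linear in the number of pixels. The minimal NumPy sketch below (function names, toy kernel sizes 5×5 / 7×7 with dilation 3, and random weights are illustrative assumptions, not the authors' implementation) shows how that decomposition produces an attention map of the same shape as the input:

```python
import numpy as np

def depthwise_conv2d(x, k, dilation=1):
    """Per-channel ('same'-padded) 2D convolution.
    x: (C, H, W) feature map; k: (C, kh, kw) one kernel per channel."""
    C, H, W = x.shape
    kh, kw = k.shape[1:]
    ph = dilation * (kh - 1) // 2
    pw = dilation * (kw - 1) // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)))
    out = np.zeros((C, H, W), dtype=float)
    # Accumulate one shifted copy per kernel tap: linear in H*W.
    for i in range(kh):
        for j in range(kw):
            out += k[:, i:i + 1, j:j + 1] * xp[:, i * dilation:i * dilation + H,
                                               j * dilation:j * dilation + W]
    return out

def large_kernel_attention(x, k_local, k_dilated, w_point, dilation=3):
    """LKA sketch: local depthwise conv -> dilated depthwise conv (long range)
    -> 1x1 conv (channel adaptability), then element-wise gating of the input."""
    attn = depthwise_conv2d(x, k_local)                 # local spatial context
    attn = depthwise_conv2d(attn, k_dilated, dilation)  # long-range context
    attn = np.einsum('oc,chw->ohw', w_point, attn)      # 1x1 channel mixing
    return x * attn                                     # attention as gating

rng = np.random.default_rng(0)
C, H, W = 4, 16, 16
x = rng.standard_normal((C, H, W))
y = large_kernel_attention(x,
                           rng.standard_normal((C, 5, 5)) * 0.1,
                           rng.standard_normal((C, 7, 7)) * 0.1,
                           rng.standard_normal((C, C)) * 0.1)
print(y.shape)  # same (C, H, W) shape as the input
```

Unlike softmax self-attention, no H·W × H·W affinity matrix is ever formed, which is what makes the attention "linear" for high-resolution inputs.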