Paper Title
Efficient Image Super-Resolution Using Pixel Attention
Authors

Abstract
This work aims at designing a lightweight convolutional neural network for image super-resolution (SR). With simplicity in mind, we construct a concise and effective network with a newly proposed pixel attention scheme. Pixel attention (PA) is similar to channel attention and spatial attention in formulation. The difference is that PA produces 3D attention maps instead of a 1D attention vector or a 2D map. This attention scheme introduces fewer additional parameters but generates better SR results. On the basis of PA, we propose two building blocks for the main branch and the reconstruction branch, respectively. The first, the SC-PA block, has the same structure as the Self-Calibrated convolution but with our PA layer. This block is much more efficient than conventional residual/dense blocks, thanks to its two-branch architecture and attention scheme. The second, the U-PA block, combines nearest-neighbor upsampling, convolution, and PA layers. It improves the final reconstruction quality at little parameter cost. Our final model, PAN, achieves performance similar to that of the lightweight networks SRResNet and CARN, but with only 272K parameters (17.92% of SRResNet's and 17.09% of CARN's). The effectiveness of each proposed component is also validated by ablation studies. The code is available at https://github.com/zhaohengyuan1/PAN.
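To make the formulation concrete: pixel attention can be read as "a 1x1 convolution followed by a sigmoid produces a C x H x W (3D) attention map, which rescales every feature value element-wise". The sketch below is a hedged, dependency-free illustration of that idea in plain Python (the actual implementation in the authors' repository is in PyTorch; all function names, shapes, and weights here are invented for exposition).

```python
import math

def sigmoid(x):
    # Standard logistic function, applied per element.
    return 1.0 / (1.0 + math.exp(-x))

def conv1x1(feat, weights):
    """A 1x1 convolution is a per-pixel linear mix across channels.
    feat: C_in x H x W nested lists; weights: C_out x C_in matrix."""
    c_in = len(feat)
    h, w = len(feat[0]), len(feat[0][0])
    return [[[sum(w_row[c] * feat[c][i][j] for c in range(c_in))
              for j in range(w)] for i in range(h)]
            for w_row in weights]

def pixel_attention(feat, weights):
    """Pixel attention (sketch): sigmoid(conv1x1(feat)) yields a 3D
    attention map with one weight per channel, per spatial position;
    the input feature is multiplied by it element-wise."""
    attn = conv1x1(feat, weights)
    c, h, w = len(feat), len(feat[0]), len(feat[0][0])
    return [[[feat[k][i][j] * sigmoid(attn[k][i][j])
              for j in range(w)] for i in range(h)] for k in range(c)]

# Toy usage: 2 channels, 2x2 spatial grid, identity mixing weights,
# so each value x becomes x * sigmoid(x).
feat = [[[1.0, 2.0], [3.0, 4.0]],
        [[0.0, -1.0], [2.0, 0.0]]]
identity = [[1.0, 0.0], [0.0, 1.0]]
out = pixel_attention(feat, identity)
```

Contrast with the alternatives mentioned in the abstract: channel attention would compute one scalar per channel (a 1D vector of length C), spatial attention one scalar per position (a 2D H x W map); PA keeps a separate weight for every (channel, row, column) triple at the cost of only the 1x1 convolution's C x C parameters.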