Paper Title
Self-Calibrated Efficient Transformer for Lightweight Super-Resolution
Paper Authors
Paper Abstract
Recently, deep learning has been successfully applied to single-image super-resolution (SISR) with remarkable performance. However, most existing methods focus on building ever more complex networks with a large number of layers, which can entail heavy computational costs and memory storage. To address this problem, we present a lightweight Self-Calibrated Efficient Transformer (SCET) network. The architecture of SCET mainly consists of a self-calibrated module and an efficient transformer block, where the self-calibrated module adopts the pixel attention mechanism to extract image features effectively. To further exploit the contextual information in these features, we employ an efficient transformer that helps the network capture similar features over long distances and thus recover sufficient texture details. We provide comprehensive results for different settings of the overall network. Our proposed method achieves markedly better performance than baseline methods. The source code and pre-trained models are available at https://github.com/AlexZou14/SCET.
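The abstract's "pixel attention" refers to producing a per-pixel, per-channel attention map (via a 1x1 convolution followed by a sigmoid) and reweighting the feature tensor elementwise. A minimal NumPy sketch of that mechanism is below; the function name, the random stand-in weights, and the tensor shapes are illustrative assumptions, not the paper's actual implementation (SCET uses learned convolutions inside its self-calibrated module).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pixel_attention(features, w):
    """Hypothetical sketch of pixel attention.

    features: (C, H, W) feature map.
    w: (C, C) weight matrix of a 1x1 convolution.

    A 1x1 conv is a channel-mixing matmul at every pixel; the sigmoid
    squashes the result into (0, 1), giving one attention value per
    channel per pixel, which then reweights the input elementwise.
    """
    c, h, width = features.shape
    flat = features.reshape(c, -1)                 # (C, H*W)
    attn = sigmoid(w @ flat).reshape(c, h, width)  # 1x1 conv + sigmoid
    return features * attn                         # elementwise reweighting

# Toy usage with random weights standing in for learned parameters.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))   # 8 channels, 4x4 spatial map
weights = rng.standard_normal((8, 8)) * 0.1
out = pixel_attention(feats, weights)
print(out.shape)  # (8, 4, 4): same shape as the input features
```

Because the attention map lies in (0, 1), the output never exceeds the input in magnitude; the network learns where to suppress features, pixel by pixel.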