Paper Title
Meta-Transfer Learning for Zero-Shot Super-Resolution
Paper Authors
Paper Abstract
Convolutional neural networks (CNNs) have shown dramatic improvements in single image super-resolution (SISR) by using large-scale external samples. Despite their remarkable performance based on external datasets, they cannot exploit internal information within a specific image. Another problem is that they are applicable only to the specific condition of the data on which they are supervised. For instance, the low-resolution (LR) image should be a "bicubic" downsampled noise-free image from a high-resolution (HR) one. To address both issues, zero-shot super-resolution (ZSSR) has been proposed for flexible internal learning. However, it requires thousands of gradient updates, i.e., a long inference time. In this paper, we present Meta-Transfer Learning for Zero-Shot Super-Resolution (MZSR), which leverages ZSSR. Precisely, it is based on finding a generic initial parameter that is suitable for internal learning. Thus, we can exploit both external and internal information, where a single gradient update can yield quite considerable results (see Figure 1). With our method, the network can quickly adapt to a given image condition. In this respect, our method can be applied to a large spectrum of image conditions within a fast adaptation process.
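
The core mechanism described in the abstract is MAML-style meta-training of an initial parameter set so that one gradient step of internal learning already adapts the network to a given image condition. The following is a minimal sketch of that idea in JAX, assuming a toy two-layer convolutional network; all names (tiny_sr_net, init_params, inner_lr, meta_lr) and hyperparameters are illustrative assumptions, not the authors' implementation, which uses a deeper residual CNN, kernel-conditioned degradations, and a transfer-learning stage before meta-training.

# Minimal sketch of meta-training for single-step adaptation (assumed setup, not MZSR's code).
import jax
import jax.numpy as jnp

def tiny_sr_net(params, x):
    # Toy fully convolutional mapper (NHWC in, NHWC out), standing in for the SR network.
    w1, b1, w2, b2 = params
    conv = lambda z, w: jax.lax.conv_general_dilated(
        z, w, (1, 1), "SAME", dimension_numbers=("NHWC", "HWIO", "NHWC"))
    return conv(jax.nn.relu(conv(x, w1) + b1), w2) + b2

def init_params(key, feat=8):
    k1, k2 = jax.random.split(key)
    return (0.1 * jax.random.normal(k1, (3, 3, 3, feat)), jnp.zeros(feat),
            0.1 * jax.random.normal(k2, (3, 3, feat, 3)), jnp.zeros(3))

def loss(params, inp, target):
    return jnp.mean(jnp.abs(tiny_sr_net(params, inp) - target))  # L1 reconstruction loss

def inner_update(params, inp, target, inner_lr=1e-2):
    # One task-specific gradient step: the "single gradient update" the abstract refers to.
    grads = jax.grad(loss)(params, inp, target)
    return jax.tree_util.tree_map(lambda p, g: p - inner_lr * g, params, grads)

def meta_loss(params, task):
    # A task is an (input, target) pair synthesized with one sampled degradation
    # (blur kernel, noise level, ...). The outer loss scores the parameters AFTER
    # the single inner step, so meta-training optimizes for fast adaptation.
    # (In practice the adapted model is evaluated on a held-out split of the task.)
    inp, target = task
    return loss(inner_update(params, inp, target), inp, target)

@jax.jit
def meta_step(params, task, meta_lr=1e-4):
    grads = jax.grad(meta_loss)(params, task)
    return jax.tree_util.tree_map(lambda p, g: p - meta_lr * g, params, grads)

At meta-test time (the zero-shot part), the task is built internally from the test LR image alone, ZSSR-style: the LR image is downscaled once more to form an (LR-son, LR) training pair, inner_update is applied once, and the adapted network is then used to super-resolve the LR image itself.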