Paper Title

Perception-Oriented Single Image Super-Resolution using Optimal Objective Estimation

Authors

Park, Seung Ho, Moon, Young Su, Cho, Nam Ik

Abstract

Single-image super-resolution (SISR) networks trained with perceptual and adversarial losses provide high-contrast outputs compared to those of networks trained with distortion-oriented losses, such as L1 or L2. However, it has been shown that using a single perceptual loss is insufficient for accurately restoring locally varying diverse shapes in images, often generating undesirable artifacts or unnatural details. For this reason, combinations of various losses, such as perceptual, adversarial, and distortion losses, have been attempted, yet it remains challenging to find optimal combinations. Hence, in this paper, we propose a new SISR framework that applies optimal objectives for each region to generate plausible results in overall areas of high-resolution outputs. Specifically, the framework comprises two models: a predictive model that infers an optimal objective map for a given low-resolution (LR) input and a generative model that applies a target objective map to produce the corresponding SR output. The generative model is trained over our proposed objective trajectory representing a set of essential objectives, which enables the single network to learn various SR results corresponding to combined losses on the trajectory. The predictive model is trained using pairs of LR images and corresponding optimal objective maps searched from the objective trajectory. Experimental results on five benchmarks show that the proposed method outperforms state-of-the-art perception-driven SR methods in LPIPS, DISTS, PSNR, and SSIM metrics. The visual results also demonstrate the superiority of our method in perception-oriented reconstruction. The code and models are available at https://github.com/seungho-snu/SROOE.
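The two-stage inference described in the abstract can be sketched as a simple data-flow pipeline. The following is a minimal illustrative stand-in, not the authors' implementation: the real predictive and generative models are deep networks trained as described above, whereas here each stage is a placeholder (a gradient-magnitude "texture" proxy for the objective map, and nearest-neighbor upsampling for the generator) that only shows the shapes and the conditioning of the generator on the predicted map.

```python
import numpy as np

SCALE = 4  # example upscaling factor

def predict_objective_map(lr):
    """Predictive-model stand-in: infers a per-pixel value in [0, 1]
    selecting a point on the objective trajectory for each region.
    Placeholder heuristic: normalized local gradient magnitude."""
    gy, gx = np.gradient(lr.astype(np.float64))
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)  # same HxW as LR input

def generate_sr(lr, obj_map):
    """Generative-model stand-in: produces an SR output conditioned on
    the objective map. Placeholder: nearest-neighbor upsampling of both
    the image and the map that conditions it."""
    up = lambda x: np.repeat(np.repeat(x, SCALE, axis=0), SCALE, axis=1)
    return up(lr), up(obj_map)

lr = np.random.rand(32, 32)               # low-resolution input
t_map = predict_objective_map(lr)         # stage 1: optimal objective map
sr, cond = generate_sr(lr, t_map)         # stage 2: map-conditioned SR
print(sr.shape)                           # (128, 128)
```

In the actual framework, the per-region objective value steers which combined loss (perceptual, adversarial, distortion) the single generative network reproduces, so flat regions and textured regions receive different trade-offs within one forward pass.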
