Paper Title
Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network
Paper Authors
Abstract
Improving the aesthetic quality of images is challenging and in high demand among the public. To address this problem, most existing algorithms are based on supervised learning: they learn an automatic photo enhancer from paired data consisting of low-quality photos and their expert-retouched counterparts. However, the style and characteristics of photos retouched by experts may not meet the needs or preferences of general users. In this paper, we present an unsupervised image enhancement generative adversarial network (UEGAN), which learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner, rather than learning from a large number of paired images. The proposed model is based on a single deep GAN that embeds modulation and attention mechanisms to capture richer global and local features. On top of this model, we introduce two losses to handle unsupervised image enhancement: (1) a fidelity loss, defined as an L2 regularization in the feature domain of a pre-trained VGG network, which ensures that the enhanced image preserves the content of the input image, and (2) a quality loss, formulated as a relativistic hinge adversarial loss, which endows the input image with the desired characteristics. Both quantitative and qualitative results show that the proposed model effectively improves the aesthetic quality of images. Our code is available at: https://github.com/eezkni/UEGAN.
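The two losses in the abstract can be sketched numerically. The following is a minimal plain-Python sketch on toy lists, not the paper's implementation: in the actual model the fidelity features come from a pre-trained VGG network and the hinge terms operate on discriminator logits over image batches; the function names and the trade-off weight `lam` below are hypothetical.

```python
# Sketch of the two UEGAN losses on toy scalar lists (assumed interface,
# not the authors' code). Features stand in for VGG activations; logits
# stand in for discriminator outputs.

def mean(xs):
    return sum(xs) / len(xs)

def fidelity_loss(feat_enhanced, feat_input):
    """L2 (mean squared) distance between feature vectors of the
    enhanced image and the input image, keeping content unchanged."""
    return mean([(a - b) ** 2 for a, b in zip(feat_enhanced, feat_input)])

def quality_loss_d(real_logits, fake_logits):
    """Relativistic hinge loss for the discriminator: each real score
    should exceed the average fake score by a margin of 1, and each
    fake score should fall below the average real score by 1."""
    loss_real = mean([max(0.0, 1.0 - (r - mean(fake_logits))) for r in real_logits])
    loss_fake = mean([max(0.0, 1.0 + (f - mean(real_logits))) for f in fake_logits])
    return loss_real + loss_fake

def quality_loss_g(real_logits, fake_logits):
    """Relativistic hinge loss for the generator: the roles of real and
    fake are swapped relative to the discriminator loss."""
    loss_real = mean([max(0.0, 1.0 + (r - mean(fake_logits))) for r in real_logits])
    loss_fake = mean([max(0.0, 1.0 - (f - mean(real_logits))) for f in fake_logits])
    return loss_real + loss_fake

# Generator objective: quality term plus a weighted fidelity term.
lam = 0.1  # hypothetical trade-off weight, not taken from the paper
total_g = quality_loss_g([2.0], [-2.0]) + lam * fidelity_loss([0.5, 0.4], [0.5, 0.6])
```

When real and fake scores are already well separated (e.g. `real_logits=[2.0]`, `fake_logits=[-2.0]`), the discriminator's hinge terms clamp to zero, while the generator still receives a positive loss pushing fake scores up relative to real ones.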