Paper Title

Unpaired Image Enhancement with Quality-Attention Generative Adversarial Network

Paper Authors

Zhangkai Ni, Wenhan Yang, Shiqi Wang, Lin Ma, Sam Kwong

Abstract

In this work, we aim to learn an unpaired image enhancement model that can enrich low-quality images with the characteristics of high-quality images provided by users. We propose a quality-attention generative adversarial network (QAGAN) trained on unpaired data, built on a bidirectional generative adversarial network (GAN) embedded with a quality attention module (QAM). The key novelty of the proposed QAGAN lies in the QAM injected into the generator, which learns domain-relevant quality attention directly from both domains. More specifically, the proposed QAM allows the generator to effectively select semantic-related characteristics spatial-wise and to adaptively incorporate style-related attributes channel-wise. Therefore, in our proposed QAGAN, not only the discriminators but also the generator can directly access both domains, which significantly facilitates the generator in learning the mapping function. Extensive experimental results show that, compared with state-of-the-art methods based on unpaired learning, our proposed method achieves better performance in both objective and subjective evaluations.
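The abstract describes the QAM as gating the generator's features both spatial-wise (semantic selection) and channel-wise (style incorporation), driven by features from both domains. The paper's actual module design is not given here, so the following is only a minimal NumPy sketch of that general idea: hypothetical reference-domain features `ref` produce a per-channel gate and a per-pixel gate that modulate the generator features `feat`. All function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def quality_attention(feat, ref):
    """Hypothetical quality-attention gate (illustrative sketch only, not
    the paper's QAM): reference-domain features `ref` modulate `feat`
    channel-wise (style) and spatial-wise (semantics). Shapes: (C, H, W)."""
    # Channel-wise gate: global average pooling over spatial positions of
    # the reference features, squashed to (0, 1) per channel.
    chan_gate = sigmoid(ref.mean(axis=(1, 2)))          # shape (C,)
    # Spatial-wise gate: mean over channels, squashed to (0, 1) per pixel.
    spat_gate = sigmoid(ref.mean(axis=0))               # shape (H, W)
    # Apply both gates to the generator features via broadcasting.
    return feat * chan_gate[:, None, None] * spat_gate[None, :, :]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))   # generator features (C, H, W)
ref = rng.standard_normal((8, 4, 4))    # reference-domain features
out = quality_attention(feat, ref)
print(out.shape)  # (8, 4, 4)
```

Because both gates lie in (0, 1), the module can only attenuate feature responses; a learned version would replace the parameter-free pooling here with trainable projections.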
