Paper Title
Dehaze-GLCGAN: Unpaired Single Image De-hazing via Adversarial Training
Paper Authors
Paper Abstract
Single image de-hazing is a challenging problem, and it is far from solved. Most current solutions require paired image datasets that include both hazy images and their corresponding haze-free ground-truth images. However, in reality, lighting conditions and other factors can produce a range of haze-free images that could serve as ground truth for a hazy image, and a single ground-truth image cannot capture that range. This limits the scalability and practicality of paired image datasets in real-world applications. In this paper, we focus on unpaired single image de-hazing and do not rely on ground-truth images or a physical scattering model. We reduce the image de-hazing problem to an image-to-image translation problem and propose a de-hazing Global-Local Cycle-consistent Generative Adversarial Network (Dehaze-GLCGAN). The generator network of Dehaze-GLCGAN combines an encoder-decoder architecture with residual blocks to better recover the haze-free scene. We also employ a global-local discriminator structure to deal with spatially varying haze. Through an ablation study, we demonstrate the effectiveness of different factors in the performance of the proposed network. Our extensive experiments over three benchmark datasets show that our network outperforms previous work in terms of PSNR and SSIM while being trained on a smaller amount of data than other methods.
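As a rough illustration of the architecture described in the abstract, the PyTorch sketch below shows how an encoder-decoder generator with residual blocks and a global/local discriminator pair might be wired together. This is not the authors' released code: the class names (`ResidualBlock`, `Generator`, `PatchDiscriminator`), layer sizes, and the random-patch cropping used for the local discriminator are all assumptions made for illustration.

```python
# Minimal sketch of a Dehaze-GLCGAN-style generator/discriminator pair.
# All layer sizes and names are assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)


class Generator(nn.Module):
    """Encoder-decoder generator with residual blocks in the bottleneck."""
    def __init__(self, in_ch=3, base=64, n_res=6):
        super().__init__()
        # Encoder: downsample the hazy image into a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Bottleneck: residual blocks refine features for haze removal.
        self.res_blocks = nn.Sequential(*[ResidualBlock(base * 4) for _ in range(n_res)])
        # Decoder: upsample back to a haze-free image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 4, base * 2, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, in_ch, 7, padding=3), nn.Tanh(),
        )

    def forward(self, hazy):
        return self.decoder(self.res_blocks(self.encoder(hazy)))


class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic, instantiated once globally and once locally."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 4, 1, 4, padding=1),
        )

    def forward(self, img):
        return self.net(img)


def random_crop(img, size=64):
    """Crop a random local patch so the local critic sees spatially varying haze."""
    _, _, h, w = img.shape
    top = torch.randint(0, h - size + 1, (1,)).item()
    left = torch.randint(0, w - size + 1, (1,)).item()
    return img[:, :, top:top + size, left:left + size]


# Usage: the global critic scores the whole de-hazed image, the local critic
# scores random patches, which helps when haze density varies across the scene.
G = Generator()
D_global, D_local = PatchDiscriminator(), PatchDiscriminator()
hazy = torch.rand(1, 3, 256, 256)
fake_clean = G(hazy)
global_score = D_global(fake_clean)
local_score = D_local(random_crop(fake_clean))
```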