Paper Title

When Image Decomposition Meets Deep Learning: A Novel Infrared and Visible Image Fusion Method

Paper Authors

Zixiang Zhao, Jiangshe Zhang, Shuang Xu, Kai Sun, Chunxia Zhang, Junmin Liu

Paper Abstract

Infrared and visible image fusion, a hot topic in image processing and image enhancement, aims to produce fused images that retain the detail texture information of visible images and the thermal radiation information of infrared images. A critical step in this task is to decompose features at different scales and to merge them separately. In this paper, we propose a novel dual-stream auto-encoder (AE) based fusion network. The core idea is that the encoder decomposes an image into base and detail feature maps carrying low- and high-frequency information, respectively, while the decoder is responsible for reconstructing the original image. To this end, a carefully designed loss function is established to make the base feature maps similar and the detail feature maps dissimilar. In the test phase, the base and detail feature maps are merged separately via an additional fusion layer, which contains a saliency-weighted spatial attention module and a channel attention module, to adaptively preserve more information from the source images and to highlight the objects. The fused image is then recovered by the decoder. Qualitative and quantitative results demonstrate that our method generates fused images containing highlighted targets and abundant detail texture information with strong reproducibility, and that it outperforms state-of-the-art (SOTA) approaches.
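
To make the described pipeline concrete, below is a minimal PyTorch sketch of the ideas in the abstract. It is not the authors' implementation: the network depths and channel widths, the mean-absolute-activation saliency used for spatial attention, the global-average-pooling channel attention, the tanh-bounded dissimilarity term, and the loss weights `a1`-`a3` are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStreamAE(nn.Module):
    """Dual-stream auto-encoder: two encoder branches split an image into
    low-frequency 'base' and high-frequency 'detail' feature maps, and a
    shared decoder reconstructs the image from both (widths are illustrative)."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.base_enc = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.detail_enc = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def encode(self, x):
        return self.base_enc(x), self.detail_enc(x)

    def decode(self, base, detail):
        return self.decoder(torch.cat([base, detail], dim=1))

def loss_fn(model, ir, vis, a1=1.0, a2=1.0, a3=1.0):
    """Reconstruction loss plus terms that pull the two base maps together
    and push the two detail maps apart (weights a1-a3 are assumptions)."""
    b_ir, d_ir = model.encode(ir)
    b_vis, d_vis = model.encode(vis)
    rec = (F.mse_loss(model.decode(b_ir, d_ir), ir)
           + F.mse_loss(model.decode(b_vis, d_vis), vis))
    base_sim = F.mse_loss(b_ir, b_vis)                # base maps: similar
    detail_gap = torch.tanh(F.mse_loss(d_ir, d_vis))  # detail maps: dissimilar,
    return a1 * rec + a2 * base_sim - a3 * detail_gap  # bounded by tanh

def fuse(model, ir, vis):
    """Test-time fusion: channel attention on the base maps, saliency-weighted
    spatial attention on the detail maps, then decoding of the fused maps."""
    b_ir, d_ir = model.encode(ir)
    b_vis, d_vis = model.encode(vis)
    # Channel attention from global average pooling (illustrative choice).
    c = torch.softmax(torch.stack([b_ir.mean(dim=(2, 3), keepdim=True),
                                   b_vis.mean(dim=(2, 3), keepdim=True)]), dim=0)
    b_fused = c[0] * b_ir + c[1] * b_vis
    # Spatial attention from per-pixel saliency, here the mean absolute
    # activation across channels (illustrative choice).
    s = torch.softmax(torch.cat([d_ir.abs().mean(dim=1, keepdim=True),
                                 d_vis.abs().mean(dim=1, keepdim=True)],
                                dim=1), dim=1)
    d_fused = s[:, :1] * d_ir + s[:, 1:] * d_vis
    return model.decode(b_fused, d_fused)
```

In use, `loss_fn` would be minimized over registered single-channel infrared/visible training pairs scaled to [0, 1], and `fuse(model, ir, vis)` would be called at test time to produce the fused image; since the fusion layer is only applied at inference, it requires no extra training, consistent with the two-phase scheme described in the abstract.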
