Paper Title
U2Net: A General Framework with Spatial-Spectral-Integrated Double U-Net for Image Fusion
Paper Authors
Paper Abstract
In image fusion tasks, images obtained from different sources exhibit distinct properties, so treating them uniformly with a single-branch network can lead to inadequate feature extraction. Additionally, numerous works have demonstrated that multi-scale networks capture information more effectively than single-scale models in pixel-level computer vision problems. Considering these factors, we propose U2Net, a spatial-spectral-integrated double U-shaped network for image fusion. U2Net uses a spatial U-Net and a spectral U-Net to extract spatial details and spectral characteristics, respectively, allowing discriminative and hierarchical learning of features from diverse images. In contrast to most previous works, which merely employ concatenation to merge spatial and spectral information, this paper introduces a novel spatial-spectral integration structure called S2Block, which combines feature maps from different sources in a logical and effective way. We conduct experiments on two image fusion tasks: remote sensing pansharpening and hyperspectral image super-resolution (HISR). U2Net outperforms representative state-of-the-art (SOTA) approaches in both quantitative and qualitative evaluations, demonstrating the superiority of our method. The code is available at https://github.com/PSRben/U2Net.
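To make the two-branch design concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a spatial branch for the panchromatic input, a spectral branch for the upsampled multispectral input, and a placeholder fusion module standing in for the paper's S2Block. The layer widths, the single-scale branches, and the concatenation-plus-convolution fusion rule are illustrative assumptions, not the authors' implementation; the real multi-scale architecture is in the repository linked above.

```python
# Hedged sketch of the double-branch idea from the abstract (not the official U2Net).
# Assumptions: single-scale branches, a 4-band MS input, and a simple concat+conv
# stand-in (S2BlockSketch) for the paper's S2Block.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Two 3x3 convolutions with ReLU, used by both branches."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class S2BlockSketch(nn.Module):
    """Placeholder for the paper's S2Block: plain concatenation + 1x1 conv."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, spatial_feat, spectral_feat):
        return self.fuse(torch.cat([spatial_feat, spectral_feat], dim=1))


class DoubleBranchSketch(nn.Module):
    """Spatial branch (PAN) + spectral branch (upsampled MS), fused by S2BlockSketch,
    predicting a residual over the upsampled MS image."""
    def __init__(self, pan_ch=1, ms_ch=4, ch=32):
        super().__init__()
        self.spatial_branch = ConvBlock(pan_ch, ch)   # spatial details from the PAN image
        self.spectral_branch = ConvBlock(ms_ch, ch)   # spectral characteristics from the MS image
        self.s2block = S2BlockSketch(ch)
        self.head = nn.Conv2d(ch, ms_ch, 3, padding=1)

    def forward(self, pan, ms_up):
        fused = self.s2block(self.spatial_branch(pan), self.spectral_branch(ms_up))
        return ms_up + self.head(fused)


if __name__ == "__main__":
    pan = torch.randn(1, 1, 64, 64)    # panchromatic input
    ms_up = torch.randn(1, 4, 64, 64)  # low-resolution MS upsampled to PAN size
    print(DoubleBranchSketch()(pan, ms_up).shape)  # torch.Size([1, 4, 64, 64])
```

The same two-branch pattern carries over to HISR by feeding the high-resolution RGB image to the spatial branch and the upsampled hyperspectral image to the spectral branch; only the channel counts change.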