Paper Title
A Dual-Fusion Semantic Segmentation Framework with GAN for SAR Images
Paper Authors
Abstract
Deep learning based semantic segmentation is one of the most popular approaches to remote sensing image segmentation. In this paper, a network based on the widely used encoder-decoder architecture is proposed to accomplish synthetic aperture radar (SAR) image segmentation. Since optical images offer better representation capability, we propose to enrich SAR images with optical images generated by a generative adversarial network (GAN) trained on numerous SAR and optical image pairs. These generated optical images serve as an expansion of the original SAR images, thus ensuring robust segmentation results. The optical images generated by the GAN are then stitched together with the corresponding real SAR images, and an attention module applied to the stitched data strengthens the representation of the objects. Experiments indicate that our method is effective compared with other commonly used methods.
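The dual-fusion step the abstract describes (stitching the SAR image with its GAN-generated optical counterpart, then applying an attention module) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the channel-wise concatenation for "stitching" and the squeeze-and-excitation-style channel attention are assumptions, since the abstract does not specify the exact fusion or attention variant, and the weights `w1`/`w2` stand in for learned parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_with_attention(sar, optical, w1, w2):
    """Stitch a SAR patch (C1, H, W) with a generated optical patch (C2, H, W)
    and re-weight the fused channels with a tiny attention gate.

    The stitching is modeled as channel-wise concatenation; the attention
    module is modeled as a squeeze-and-excitation-style gate (an assumption,
    the paper does not specify the variant)."""
    fused = np.concatenate([sar, optical], axis=0)   # channel-wise "stitch"
    squeeze = fused.mean(axis=(1, 2))                # global average pool -> (C,)
    excite = sigmoid(w2 @ np.tanh(w1 @ squeeze))     # two-layer gate -> (C,)
    return fused * excite[:, None, None]             # re-weight each channel

rng = np.random.default_rng(0)
sar = rng.standard_normal((1, 8, 8))       # single-channel SAR patch
optical = rng.standard_normal((3, 8, 8))   # GAN-generated RGB optical patch
c = sar.shape[0] + optical.shape[0]
w1 = rng.standard_normal((c // 2, c))      # stand-ins for learned weights
w2 = rng.standard_normal((c, c // 2))
out = fuse_with_attention(sar, optical, w1, w2)
print(out.shape)  # (4, 8, 8): fused channels, same spatial size
```

In the full framework this fused, attention-weighted tensor would then feed the encoder-decoder segmentation network; here the gate simply scales each channel by a learned importance in [0, 1].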