Title

Multimodal contrastive learning for remote sensing tasks

Authors

Umangi Jain, Alex Wilson, Varun Gulshan

Abstract

Self-supervised methods have shown tremendous success in the field of computer vision, including applications in remote sensing and medical imaging. Most popular contrastive-loss-based methods, such as SimCLR, MoCo, and MoCo-v2, use multiple views of the same image by applying contrived augmentations on the image to create positive pairs and contrast them with negative examples. Although these techniques work well, most of them have been tuned on ImageNet (and similar computer vision datasets). While there have been some attempts to capture a richer set of deformations in the positive samples, in this work, we explore a promising alternative to generating positive examples for remote sensing data within the contrastive learning framework. Images captured by different sensors at the same location and nearby timestamps can be thought of as strongly augmented instances of the same scene, thus removing the need to explore and tune a set of hand-crafted strong augmentations. In this paper, we propose a simple dual-encoder framework, which is pre-trained on a large unlabeled dataset (~1M) of Sentinel-1 and Sentinel-2 image pairs. We test the embeddings on two remote sensing downstream tasks: flood segmentation and land cover mapping, and empirically show that embeddings learnt from this technique outperform the conventional technique of collecting positive examples via aggressive data augmentations.
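The core idea, treating co-located Sentinel-1 and Sentinel-2 images as the two "views" of a contrastive objective, can be illustrated with a batch-wise InfoNCE-style loss. The sketch below is not the paper's exact loss or hyperparameters: the function name `info_nce_loss`, the temperature value, and the random arrays standing in for the two encoders' outputs are all illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project each row onto the unit sphere, so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce_loss(z_s1, z_s2, temperature=0.1):
    """Symmetric InfoNCE over a batch: row i of z_s1 (Sentinel-1 embedding)
    is the positive for row i of z_s2 (Sentinel-2 embedding of the same
    location); every other row in the batch serves as a negative."""
    z1 = l2_normalize(np.asarray(z_s1, dtype=float))
    z2 = l2_normalize(np.asarray(z_s2, dtype=float))
    logits = z1 @ z2.T / temperature          # (N, N) scaled cosine similarities
    idx = np.arange(len(logits))

    def cross_entropy(lg):
        # numerically stable log-softmax over each row
        lg = lg - lg.max(axis=1, keepdims=True)
        log_prob = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_prob[idx, idx].mean()     # diagonal = matched pairs

    # average over both retrieval directions (S1 -> S2 and S2 -> S1)
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Toy usage with random stand-ins for the two encoders' outputs.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 32))
aligned = info_nce_loss(emb, emb)        # matched pairs -> small loss
mismatched = info_nce_loss(emb, emb[::-1])  # shuffled pairs -> larger loss
print(aligned, mismatched)
```

Because the two views come from physically different sensors rather than synthetic augmentations, the encoders here would in practice be two separate networks (one per modality), which is what makes the framework "dual-encoder".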
