Title
Improving Dense Contrastive Learning with Dense Negative Pairs
Authors
Abstract
Many contrastive representation learning methods learn a single global representation of an entire image. However, dense contrastive representation learning methods such as DenseCL (Wang et al., 2021) can learn better representations for tasks requiring stronger spatial localization of features, such as multi-label classification, detection, and segmentation. In this work, we study how to improve the quality of the representations learned by DenseCL by modifying the training scheme and objective function, and propose DenseCL++. We also conduct several ablation studies to better understand the effects of: (i) various techniques to form dense negative pairs among augmentations of different images, (ii) cross-view dense negative and positive pairs, and (iii) an auxiliary reconstruction task. Our results show 3.5% and 4% mAP improvement over SimCLR (Chen et al., 2020a) and DenseCL in COCO multi-label classification. In COCO and VOC segmentation tasks, we achieve 1.8% and 0.7% mIoU improvements over SimCLR, respectively.
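The dense negative pairs described in the abstract can be illustrated with a minimal NumPy sketch (not the authors' implementation): each spatial feature of one augmented view is contrasted against its matched feature from the other view (the positive) and against dense features drawn from other images in the batch (the negatives), via the standard InfoNCE objective. The function name `dense_info_nce`, the tensor shapes, and the temperature value are illustrative assumptions.

```python
import numpy as np

def dense_info_nce(q, k_pos, k_neg, tau=0.2):
    """Per-location InfoNCE loss.

    q, k_pos: (N, D) dense features at N matched locations from two
              augmented views of the same image.
    k_neg:    (M, D) dense features pooled from *other* images in the
              batch, serving as dense negative pairs.
    """
    # Normalize so dot products are cosine similarities.
    def unit(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    q, k_pos, k_neg = unit(q), unit(k_pos), unit(k_neg)

    pos = np.sum(q * k_pos, axis=-1, keepdims=True) / tau  # (N, 1)
    neg = (q @ k_neg.T) / tau                              # (N, M)
    logits = np.concatenate([pos, neg], axis=1)            # (N, 1+M)

    # Numerically stable log-softmax; the positive sits at column 0.
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits[:, 0] - np.log(np.exp(logits).sum(axis=1))
    return float(-log_prob.mean())

rng = np.random.default_rng(0)
N, M, D = 16, 64, 32
q = rng.standard_normal((N, D))
k_pos = q + 0.05 * rng.standard_normal((N, D))  # matched locations, other view
k_neg = rng.standard_normal((M, D))             # dense features of other images
loss = dense_info_nce(q, k_pos, k_neg)
print(round(loss, 4))
```

In a real setup the matched locations would come from DenseCL's dense correspondence between views, and the negatives from a memory bank or the rest of the batch; here they are simulated with noisy copies and random vectors.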