Paper Title
Joint Contrastive Learning with Infinite Possibilities
Paper Authors
Paper Abstract
This paper explores useful modifications of recent developments in contrastive learning via novel probabilistic modeling. We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL). JCL implicitly involves the simultaneous learning of an infinite number of query-key pairs, which poses tighter constraints when searching for invariant features. We derive an upper bound on this formulation that allows analytical solutions in an end-to-end training manner. While JCL is practically effective in numerous computer vision applications, we also theoretically unveil certain mechanisms that govern the behavior of JCL. We demonstrate that the proposed formulation harbors an innate agency that strongly favors similarity within each instance-specific class, and therefore remains advantageous when searching for discriminative features among distinct instances. We evaluate these proposals on multiple benchmarks, demonstrating considerable improvements over existing algorithms. Code is publicly available at: https://github.com/caiqi/Joint-Contrastive-Learning.
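For readers unfamiliar with the baseline that JCL builds on, the sketch below shows a standard InfoNCE-style contrastive loss for a single query, a single positive key, and a bank of negative keys. This is an illustrative reimplementation of the conventional loss only, not the JCL bound itself (JCL's extension to infinitely many query-key pairs involves a closed-form upper bound derived in the paper); the function name and temperature default are our own assumptions.

```python
import numpy as np

def info_nce_loss(query, pos_key, neg_keys, temperature=0.07):
    """Standard InfoNCE contrastive loss for one query.

    query:    (d,)  embedding of the query view
    pos_key:  (d,)  embedding of the positive key (another view of the same instance)
    neg_keys: (K, d) embeddings of negative keys (views of other instances)

    Illustrative baseline only; JCL generalizes this to the simultaneous
    learning of infinitely many query-key pairs via an analytical upper bound.
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    q = normalize(query)
    k_pos = normalize(pos_key)
    k_neg = normalize(neg_keys)

    # Cosine similarities scaled by temperature; positive logit first.
    logits = np.concatenate(([q @ k_pos], k_neg @ q)) / temperature
    logits -= logits.max()  # numerical stability before softmax

    # Cross-entropy against the positive key (index 0).
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```

Minimizing this loss pulls the query toward its positive key while pushing it away from all negatives, which is the "similarity within each instance-specific class" behavior the abstract refers to; JCL strengthens that pull by constraining infinitely many such pairs at once.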