Paper Title
A Unified Framework for Contrastive Learning from a Perspective of Affinity Matrix
Paper Authors
Paper Abstract
In recent years, a variety of contrastive-learning-based unsupervised visual representation learning methods have been designed and have achieved great success in many visual tasks. Generally, these methods can be roughly classified into four categories: (1) standard contrastive methods with an InfoNCE-like loss, such as MoCo and SimCLR; (2) non-contrastive methods with only positive pairs, such as BYOL and SimSiam; (3) whitening-regularization-based methods, such as W-MSE and VICReg; and (4) consistency-regularization-based methods, such as CO2. In this study, we present a new unified contrastive learning representation framework (named UniCLR) suitable for all four kinds of methods above, from a novel perspective of the basic affinity matrix. Moreover, three variants, i.e., SimAffinity, SimWhitening and SimTrace, are presented based on UniCLR. In addition, a simple symmetric loss is proposed as a new consistency regularization term based on this framework. By symmetrizing the affinity matrix, we can effectively accelerate the convergence of the training process. Extensive experiments have been conducted to show that (1) the proposed UniCLR framework can achieve results on par with, and even better than, the state of the art, (2) the proposed symmetric loss can significantly accelerate the convergence of models, and (3) SimTrace can avoid the mode collapse problem by maximizing the trace of a whitened affinity matrix, without relying on asymmetry designs or stop-gradients.
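The abstract does not give the exact formulation of the affinity matrix or the symmetric loss. The sketch below is only an illustration of the general idea, assuming a cosine-similarity affinity between two batches of augmented-view embeddings and a simple penalty on the asymmetry of that matrix; the function names and the temperature parameter are hypothetical, not taken from the paper:

```python
import numpy as np

def affinity_matrix(z1, z2, temperature=0.1):
    # Cosine-similarity affinity between two batches of view embeddings
    # (shape: batch x dim). Entry (i, j) relates sample i of view 1 to
    # sample j of view 2; the diagonal holds the positive pairs.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return z1 @ z2.T / temperature

def symmetric_loss(A):
    # Hypothetical consistency term: penalize the asymmetry of the
    # affinity matrix, pushing A toward A.T (zero iff A is symmetric).
    return np.mean((A - A.T) ** 2)

# Example: 8 samples, 32-dim embeddings from two augmented views.
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
loss = symmetric_loss(affinity_matrix(z1, z2))
```

Any of the four families of losses described in the abstract could then be computed on top of such an affinity matrix, with this term added as a regularizer.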