Paper Title

An Empirical Study on Disentanglement of Negative-free Contrastive Learning

Paper Authors

Jinkun Cao, Ruiqian Nai, Qing Yang, Jialei Huang, Yang Gao

Paper Abstract

Negative-free contrastive learning methods have attracted a lot of attention for their simplicity and impressive performance in large-scale pretraining. However, their disentanglement properties remain unexplored. In this paper, we empirically study the disentanglement properties of negative-free contrastive learning methods. We find that existing disentanglement metrics fail to make meaningful measurements for high-dimensional representation models, so we propose a new disentanglement metric based on the mutual information between latent representations and data factors. With this proposed metric, we benchmark the disentanglement property of negative-free contrastive learning on both popular synthetic datasets and the real-world dataset CelebA. Our study shows that the investigated methods can learn a well-disentangled subset of the representation. To the best of our knowledge, we are the first to extend the study of disentangled representation learning to high-dimensional representation spaces and to introduce negative-free contrastive learning methods into this area. The source code of this paper is available at \url{https://github.com/noahcao/disentanglement_lib_med}.
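For illustration, the sketch below shows one common way a mutual-information-based disentanglement score can be computed, in the general spirit the abstract describes: discretize each latent dimension, estimate the MI between each latent dimension and each ground-truth factor from histograms, and normalize each factor's best MI by that factor's entropy. This is not the paper's exact metric (see the repository above for the authors' implementation); the function name `mi_disentanglement_score` and the parameter `n_bins` are assumptions made for this sketch.

```python
# Minimal illustrative sketch of an MI-based disentanglement score.
# NOT the paper's exact metric; a generic histogram-based estimate.
import numpy as np
from sklearn.metrics import mutual_info_score

def mi_disentanglement_score(latents, factors, n_bins=20):
    """latents: (n_samples, n_latents) float array of representations.
    factors: (n_samples, n_factors) int array of ground-truth factor labels.
    Returns the mean over factors of the best entropy-normalized MI
    achieved by any single latent dimension."""
    n_latents = latents.shape[1]
    n_factors = factors.shape[1]
    # Discretize each latent dimension into equal-width bins so that a
    # histogram-based MI estimate is feasible even for high-dimensional codes.
    binned = np.stack(
        [np.digitize(latents[:, j],
                     np.histogram_bin_edges(latents[:, j], n_bins))
         for j in range(n_latents)],
        axis=1)
    scores = []
    for k in range(n_factors):
        # Entropy of the factor (in nats), used to normalize MI into [0, 1].
        _, counts = np.unique(factors[:, k], return_counts=True)
        p = counts / counts.sum()
        h = -(p * np.log(p)).sum()
        # MI between every latent dimension and this factor; keep the best one.
        mis = [mutual_info_score(binned[:, j], factors[:, k])
               for j in range(n_latents)]
        scores.append(max(mis) / h if h > 0 else 0.0)
    return float(np.mean(scores))
```

A per-factor maximum (rather than a sum over latent dimensions) is used here because disentanglement is usually taken to mean that each factor is captured by a small, dedicated subset of the representation; scoring only the best-matching dimension rewards exactly that structure.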
