Paper Title

Contrastive Bayesian Analysis for Deep Metric Learning

Authors

Kan, Shichao, He, Zhiquan, Cen, Yigang, Li, Yang, Mladenovic, Vladimir, He, Zhihai

Abstract

Recent methods for deep metric learning have been focusing on designing different contrastive loss functions between positive and negative pairs of samples so that the learned feature embedding is able to pull positive samples of the same class closer and push negative samples from different classes away from each other. In this work, we recognize that there is a significant semantic gap between features at the intermediate feature layer and class labels at the final output layer. To bridge this gap, we develop a contrastive Bayesian analysis to characterize and model the posterior probabilities of image labels conditioned on their feature similarity in a contrastive learning setting. This contrastive Bayesian analysis leads to a new loss function for deep metric learning. To improve the generalization capability of the proposed method onto new classes, we further extend the contrastive Bayesian loss with a metric variance constraint. Our experimental results and ablation studies demonstrate that the proposed contrastive Bayesian metric learning method significantly improves the performance of deep metric learning in both supervised and pseudo-supervised scenarios, outperforming existing methods by a large margin.
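As a rough illustration of the contrastive setting the abstract describes (pulling same-class embeddings together and pushing different-class embeddings apart), the sketch below implements a generic margin-based pairwise contrastive loss. This is a standard baseline formulation, not the paper's contrastive Bayesian loss; the function name and margin value are illustrative choices.

```python
import numpy as np

def contrastive_pair_loss(f1, f2, same_class, margin=1.0):
    """Generic margin-based pairwise contrastive loss (illustrative,
    NOT the paper's contrastive Bayesian loss).

    Positive pairs (same class) are penalized by their squared distance;
    negative pairs (different classes) are penalized only when they are
    closer than `margin`."""
    d = np.linalg.norm(f1 - f2)  # Euclidean distance between embeddings
    if same_class:
        return 0.5 * d ** 2                      # pull positives together
    return 0.5 * max(0.0, margin - d) ** 2       # push negatives beyond margin

# Toy 2-D embeddings: b is near a, c is far from a.
a = np.array([1.0, 0.0])
b = np.array([1.0, 0.1])
c = np.array([-1.0, 0.0])

pos_loss = contrastive_pair_loss(a, b, same_class=True)    # small: pair is close
neg_loss = contrastive_pair_loss(a, c, same_class=False)   # zero: distance > margin
```

Here the positive pair contributes a small loss that shrinks as the embeddings coincide, while a negative pair already separated by more than the margin contributes nothing, which is exactly the pull/push behavior the abstract attributes to contrastive losses.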
