Paper Title
Few-Max: Few-Shot Domain Adaptation for Unsupervised Contrastive Representation Learning
Paper Authors
Abstract
Contrastive self-supervised learning methods learn to map data points such as images into non-parametric representation space without requiring labels. While highly successful, current methods require a large amount of data in the training phase. In situations where the target training set is limited in size, generalization is known to be poor. Pretraining on a large source data set and fine-tuning on the target samples is prone to overfitting in the few-shot regime, where only a small number of target samples are available. Motivated by this, we propose a domain adaption method for self-supervised contrastive learning, termed Few-Max, to address the issue of adaptation to a target distribution under few-shot learning. To quantify the representation quality, we evaluate Few-Max on a range of source and target datasets, including ImageNet, VisDA, and fastMRI, on which Few-Max consistently outperforms other approaches.
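The contrastive objective the abstract refers to can be illustrated with a generic InfoNCE-style loss: embeddings of two augmented views of the same image are pulled together, while all other images in the batch act as negatives. This is a minimal NumPy sketch of that standard loss, not the Few-Max objective itself; the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Generic InfoNCE contrastive loss over two views (illustrative sketch).

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Matching rows are positive pairs; every other row is a negative.
    """
    # L2-normalize embeddings so similarities are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature            # (N, N) similarity matrix
    # Row-wise log-softmax; positives sit on the diagonal
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When the two views encode the same images consistently, the loss is near zero; misaligned pairs drive it up, which is what the representation learner minimizes during training.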