Paper Title

How Well Do Self-Supervised Models Transfer?

Paper Authors

Linus Ericsson, Henry Gouk, Timothy M. Hospedales

Paper Abstract

Self-supervised visual representation learning has seen huge progress recently, but no large scale evaluation has compared the many models now available. We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks, including many-shot and few-shot recognition, object detection, and dense prediction. We compare their performance to a supervised baseline and show that on most tasks the best self-supervised models outperform supervision, confirming the recently observed trend in the literature. We find ImageNet Top-1 accuracy to be highly correlated with transfer to many-shot recognition, but increasingly less so for few-shot, object detection and dense prediction. No single self-supervised method dominates overall, suggesting that universal pre-training is still unsolved. Our analysis of features suggests that top self-supervised learners fail to preserve colour information as well as supervised alternatives, but tend to induce better classifier calibration, and less attentive overfitting than supervised learners.
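The abstract summarises an empirical protocol rather than code. As a rough illustration only, the sketch below shows how the two quantities it mentions could be computed: downstream transfer accuracy of a frozen backbone, approximated here by a linear probe, and the rank correlation between upstream ImageNet Top-1 accuracy and that transfer accuracy across models. The model names, feature arrays, and imagenet_top1 values are hypothetical placeholders, and the linear probe is an assumed evaluation choice, not necessarily the paper's exact setup.

```python
# Minimal sketch (not the authors' released code): linear-probe transfer of
# frozen features plus the correlation between upstream ImageNet Top-1 and
# downstream transfer accuracy. All data and numbers below are placeholders.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression


def linear_probe_accuracy(train_x, train_y, test_x, test_y):
    """Fit a logistic-regression probe on frozen features; return test accuracy."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_x, train_y)
    return clf.score(test_x, test_y)


rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=600)

# Placeholder "frozen features" for four hypothetical pre-trained models.
features = {name: rng.normal(size=(600, 128)) for name in ["m1", "m2", "m3", "m4"]}
imagenet_top1 = {"m1": 0.69, "m2": 0.71, "m3": 0.74, "m4": 0.76}  # hypothetical values

# Downstream transfer accuracy per model (train on 500 samples, test on 100).
transfer_acc = {
    name: linear_probe_accuracy(x[:500], labels[:500], x[500:], labels[500:])
    for name, x in features.items()
}

# Rank correlation between upstream ImageNet accuracy and downstream transfer.
names = sorted(features)
rho, _ = spearmanr([imagenet_top1[n] for n in names], [transfer_acc[n] for n in names])
print(transfer_acc, rho)
```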
