Paper Title

How Useful is Self-Supervised Pretraining for Visual Tasks?

Paper Authors

Alejandro Newell, Jia Deng

Paper Abstract

Recent advances have spurred incredible progress in self-supervised pretraining for vision. We investigate what factors may play a role in the utility of these pretraining methods for practitioners. To do this, we evaluate various self-supervised algorithms across a comprehensive array of synthetic datasets and downstream tasks. We prepare a suite of synthetic data that enables an endless supply of annotated images as well as full control over dataset difficulty. Our experiments offer insights into how the utility of self-supervision changes as the number of available labels grows, as well as how it varies as a function of the downstream task and the properties of the training data. We also find that linear evaluation does not correlate with finetuning performance. Code and data are available at \href{https://www.github.com/princeton-vl/selfstudy}{github.com/princeton-vl/selfstudy}.
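
The finding that linear evaluation does not track finetuning performance hinges on the difference between the two transfer protocols. Below is a minimal PyTorch sketch of that difference, assuming a torchvision ResNet-18 as a stand-in for a self-supervised pretrained encoder; the paper's actual architectures and training details may differ.

```python
import torch
import torch.nn as nn
import torchvision.models as models

def linear_eval_model(backbone: nn.Module, num_classes: int) -> nn.Module:
    """Linear evaluation: freeze all pretrained weights, train only a new linear head."""
    for p in backbone.parameters():
        p.requires_grad = False
    # The replacement head is freshly initialized, so it remains trainable.
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone

def finetune_model(backbone: nn.Module, num_classes: int) -> nn.Module:
    """Finetuning: replace the head and update every weight on the downstream task."""
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    for p in backbone.parameters():
        p.requires_grad = True
    return backbone

# Hypothetical usage: a ResNet-18 standing in for a self-supervised encoder.
model = linear_eval_model(models.resnet18(), num_classes=10)
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.1
)
```

Because linear evaluation only probes the features the frozen encoder already provides, while finetuning can reshape them, the two protocols can rank pretraining methods differently.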
