Paper Title

Revisiting the Critical Factors of Augmentation-Invariant Representation Learning

Paper Authors

Junqiang Huang, Xiangwen Kong, Xiangyu Zhang

Paper Abstract

We focus on better understanding the critical factors of augmentation-invariant representation learning. We revisit MoCo v2 and BYOL and try to prove the authenticity of the following assumption: different frameworks bring about representations with different characteristics even with the same pretext task. We establish the first benchmark for fair comparisons between MoCo v2 and BYOL, and observe that (i) sophisticated model configurations enable better adaptation to the pre-training dataset, and (ii) mismatched optimization strategies between pre-training and fine-tuning hinder models from achieving competitive transfer performance. Given the fair benchmark, we investigate further and find that the asymmetry of the network structure enables contrastive frameworks to work well under the linear evaluation protocol, while it may hurt transfer performance on long-tailed classification tasks. Moreover, negative samples do not make models more sensitive to the choice of data augmentations, nor does the asymmetric network structure. We believe our findings provide useful information for future work.
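The abstract contrasts two objective styles: MoCo v2's contrastive loss, which pushes a positive pair together against negative samples, and BYOL's loss, which uses no negatives but relies on an asymmetric predictor head on the online branch. The following is a minimal numpy sketch of these two loss formulations, not the authors' implementation; the function names and toy vectors are illustrative only.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere, as both frameworks do."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce(q, k, negatives, tau=0.2):
    """MoCo-style InfoNCE loss: query q should match key k while
    being dissimilar to every row of the negatives bank."""
    q, k, negatives = l2_normalize(q), l2_normalize(k), l2_normalize(negatives)
    logits = np.concatenate([[q @ k], negatives @ q]) / tau
    logits -= logits.max()  # numerical stability before softmax
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

def byol_loss(online_pred, target_proj):
    """BYOL-style loss: negative cosine similarity between the online
    branch's prediction and the target branch's projection. The extra
    predictor producing online_pred is the structural asymmetry the
    abstract refers to; no negatives are involved."""
    p, z = l2_normalize(online_pred), l2_normalize(target_proj)
    return 2.0 - 2.0 * float(p @ z)
```

For example, `byol_loss(v, v)` is 0 for any vector `v` (perfectly aligned views), while `info_nce` grows as the query drifts from its key toward the negatives.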
