Paper Title

Learning Downstream Task by Selectively Capturing Complementary Knowledge from Multiple Self-supervisedly Learning Pretexts

Paper Authors

Jiayu Yao, Qingyuan Wu, Quan Feng, Songcan Chen

Paper Abstract

Self-supervised learning (SSL), as a newly emerging unsupervised representation learning paradigm, generally follows a two-stage learning pipeline: 1) learning invariant and discriminative representations with auto-annotation pretext(s), then 2) transferring the representations to assist downstream task(s). These two stages are usually implemented separately, making the learned representations agnostic to the downstream tasks. Currently, most works are devoted to exploring the first stage, whereas how to learn downstream tasks with limited labeled data using the already learned representations is less studied. In particular, it is crucial and challenging to selectively utilize the complementary representations from diverse pretexts for a downstream task. In this paper, we propose a novel solution that leverages the attention mechanism to adaptively squeeze suitable representations for the task at hand. Meanwhile, resorting to information theory, we theoretically prove that gathering representations from diverse pretexts is more effective than relying on a single one. Extensive experiments validate that our scheme significantly exceeds currently popular pretext-matching-based methods in gathering knowledge and relieving negative transfer in downstream tasks.
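The abstract describes attention-based fusion of representations from multiple pretext encoders before a downstream classifier. Below is a minimal, hypothetical PyTorch sketch of that idea; the module name `AttentiveFusion`, the scoring function, and all dimensions are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class AttentiveFusion(nn.Module):
    """Hypothetical sketch: fuse frozen representations from K pretext
    encoders via learned attention over pretexts, then classify."""

    def __init__(self, rep_dim: int, num_classes: int):
        super().__init__()
        # A single learned query scores how relevant each pretext's
        # representation is for the downstream task.
        self.query = nn.Parameter(torch.randn(rep_dim))
        self.proj = nn.Linear(rep_dim, rep_dim)
        self.classifier = nn.Linear(rep_dim, num_classes)

    def forward(self, reps: torch.Tensor) -> torch.Tensor:
        # reps: (batch, num_pretexts, rep_dim), one row per pretext encoder
        keys = torch.tanh(self.proj(reps))                  # (B, K, D)
        scores = keys @ self.query                          # (B, K)
        weights = torch.softmax(scores, dim=-1)             # attention over pretexts
        fused = (weights.unsqueeze(-1) * reps).sum(dim=1)   # (B, D) weighted sum
        return self.classifier(fused)


if __name__ == "__main__":
    # Toy usage: 3 pretext encoders, 128-d representations, 10-way task.
    model = AttentiveFusion(rep_dim=128, num_classes=10)
    dummy_reps = torch.randn(4, 3, 128)   # batch of 4 examples
    logits = model(dummy_reps)
    print(logits.shape)                   # torch.Size([4, 10])
```

In this sketch the attention weights are learned only from the limited downstream labels, which is one way to let the task itself decide how much to borrow from each pretext and to down-weight pretexts that would otherwise cause negative transfer.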
