Paper Title

Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation

Authors

Jian Liang, Dapeng Hu, Jiashi Feng

Abstract

Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain. Prior UDA methods typically require access to the source data when learning to adapt the model, making them risky and inefficient for decentralized private data. This work tackles a practical setting where only a trained source model is available and investigates how we can effectively utilize such a model without source data to solve UDA problems. We propose a simple yet generic representation learning framework, named \emph{Source HypOthesis Transfer} (SHOT). SHOT freezes the classifier module (hypothesis) of the source model and learns the target-specific feature extraction module by exploiting both information maximization and self-supervised pseudo-labeling to implicitly align representations from the target domain to the source hypothesis. To verify its versatility, we evaluate SHOT in a variety of adaptation cases including closed-set, partial-set, and open-set domain adaptation. Experiments indicate that SHOT yields state-of-the-art results on multiple domain adaptation benchmarks.

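To make the adaptation recipe described in the abstract concrete, below is a minimal PyTorch sketch of the SHOT-style objective: the source classifier (hypothesis) is frozen, and only the feature extractor is updated with an information-maximization loss plus an optional pseudo-label cross-entropy term. The module layout, architecture sizes, loss weight `beta`, and the way pseudo-labels are supplied are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a SHOT-style adaptation step (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

def information_maximization_loss(logits):
    """Per-sample entropy minimization (confident predictions) plus a
    diversity term that keeps the batch-mean prediction close to uniform."""
    probs = F.softmax(logits, dim=1)
    ent = -(probs * torch.log(probs + 1e-5)).sum(dim=1).mean()
    mean_probs = probs.mean(dim=0)
    div = (mean_probs * torch.log(mean_probs + 1e-5)).sum()
    return ent + div

# Hypothetical source model split into a feature extractor and a classifier head.
feature_extractor = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 64))
classifier = nn.Linear(64, 10)

# SHOT freezes the source hypothesis (classifier) and adapts only the extractor.
for p in classifier.parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(feature_extractor.parameters(), lr=1e-3, momentum=0.9)

def adaptation_step(target_batch, pseudo_labels, beta=0.3):
    """One update on an unlabeled target batch: information maximization,
    optionally combined with self-supervised pseudo-label cross-entropy."""
    logits = classifier(feature_extractor(target_batch))
    loss = information_maximization_loss(logits)
    if pseudo_labels is not None:
        loss = loss + beta * F.cross_entropy(logits, pseudo_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random target features; in SHOT the pseudo-labels would come
# from clustering the target features, not from ground-truth annotations.
x_target = torch.randn(32, 256)
pseudo = torch.randint(0, 10, (32,))
print(adaptation_step(x_target, pseudo))
```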