Paper Title


Variational Model Perturbation for Source-Free Domain Adaptation

Paper Authors

Mengmeng Jing, Xiantong Zhen, Jingjing Li, Cees G. M. Snoek

Paper Abstract


We aim for source-free domain adaptation, where the task is to deploy a model pre-trained on source domains to target domains. The challenges stem from the distribution shift from the source to the target domain, coupled with the unavailability of any source data and labeled target data for optimization. Rather than fine-tuning the model by updating the parameters, we propose to perturb the source model to achieve adaptation to target domains. We introduce perturbations into the model parameters by variational Bayesian inference in a probabilistic framework. By doing so, we can effectively adapt the model to the target domain while largely preserving the discriminative ability. Importantly, we demonstrate the theoretical connection to learning Bayesian neural networks, which proves the generalizability of the perturbed model to target domains. To enable more efficient optimization, we further employ a parameter sharing strategy, which substantially reduces the learnable parameters compared to a fully Bayesian neural network. Our model perturbation provides a new probabilistic way for domain adaptation which enables efficient adaptation to target domains while maximally preserving knowledge in source models. Experiments on several source-free benchmarks under three different evaluation settings verify the effectiveness of the proposed variational model perturbation for source-free domain adaptation.
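The abstract describes perturbing frozen source-model weights with a learned Gaussian via variational inference, and sharing the scale parameter so far fewer parameters are learnable than in a fully Bayesian network. The following NumPy sketch illustrates that idea under stated assumptions: the class `PerturbedLinear`, the zero-initialized perturbation mean `mu`, and sharing a single scale `rho` across the whole weight matrix are illustrative choices, not the authors' implementation.

```python
import numpy as np

def softplus(x):
    # maps an unconstrained parameter to a positive standard deviation
    return np.log1p(np.exp(x))

class PerturbedLinear:
    """Linear layer whose source weights stay frozen; adaptation learns a
    Gaussian perturbation delta ~ N(mu, sigma^2) added to the weights.
    With parameter sharing, one rho (hence one sigma) is shared across the
    whole weight matrix instead of one per weight (illustrative scheme)."""
    def __init__(self, w_src, rng):
        self.w_src = w_src                 # frozen source weights, shape (d_in, d_out)
        self.mu = np.zeros_like(w_src)     # learnable perturbation mean
        self.rho = np.float64(-5.0)        # shared scale parameter: sigma = softplus(rho)
        self.rng = rng

    def sample_weights(self):
        # reparameterization trick: delta = mu + sigma * eps, eps ~ N(0, I)
        eps = self.rng.standard_normal(self.w_src.shape)
        return self.w_src + self.mu + softplus(self.rho) * eps

    def forward(self, x):
        return x @ self.sample_weights()

rng = np.random.default_rng(0)
layer = PerturbedLinear(rng.standard_normal((4, 3)), rng)
y = layer.forward(np.ones((2, 4)))

# parameter-sharing saves learnable parameters versus a fully Bayesian
# layer, which would need a separate mu and rho for every weight
shared_params = layer.mu.size + 1          # mu per weight + one shared rho
fully_bayesian_params = 2 * layer.w_src.size
```

Training would update only `mu` and `rho` on unlabeled target data (e.g. with an entropy-style objective), leaving `w_src` untouched, which is how the perturbation view preserves source knowledge.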
