Paper Title

Domain Adaptation with Adversarial Training on Penultimate Activations

Paper Authors

Tao Sun, Cheng Lu, Haibin Ling

Paper Abstract

Enhancing model prediction confidence on target data is an important objective in Unsupervised Domain Adaptation (UDA). In this paper, we explore adversarial training on penultimate activations, i.e., input features of the final linear classification layer. We show that this strategy is more efficient and better correlated with the objective of boosting prediction confidence than adversarial training on input images or intermediate features, as used in previous works. Furthermore, with activation normalization commonly used in domain adaptation to reduce domain gap, we derive two variants and systematically analyze the effects of normalization on our adversarial training. This is illustrated both in theory and through empirical analysis on real adaptation tasks. Extensive experiments are conducted on popular UDA benchmarks under both standard setting and source-data free setting. The results validate that our method achieves the best scores against previous arts. Code is available at https://github.com/tsun/APA.
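To make the idea concrete, the sketch below is a minimal, VAT-style illustration of adversarial training applied to penultimate activations, i.e., the features fed into the final linear classifier. It is not the authors' APA implementation (see https://github.com/tsun/APA for the official code): the `backbone`/`head` split, the perturbation radius `eps`, and the loss weight `lam` are assumed names for illustration, and the normalization-based variants analyzed in the paper are omitted.

```python
# Illustrative sketch only: adversarial perturbation on penultimate features,
# assuming a PyTorch model split into a feature extractor `backbone` and a
# final linear classifier `head`. Hyperparameters `eps` and `lam` are made up.
import torch
import torch.nn.functional as F


def adversarial_penultimate_loss(backbone, head, x, eps=1.0, lam=0.1):
    # Penultimate activations: the input features of the final linear layer.
    feat = backbone(x)                        # shape (B, D)
    logits = head(feat)
    prob = F.softmax(logits, dim=1).detach()  # clean prediction, fixed target

    # 1) Find the feature-space perturbation that most changes the prediction.
    delta = torch.randn_like(feat, requires_grad=True)
    adv_logits = head(feat.detach() + delta)
    kl = F.kl_div(F.log_softmax(adv_logits, dim=1), prob, reduction="batchmean")
    grad = torch.autograd.grad(kl, delta)[0]
    delta = eps * F.normalize(grad.detach(), dim=1)   # L2-normalized step

    # 2) Train backbone and head to be robust to that perturbation, which
    #    encourages confident, locally smooth predictions on target data.
    adv_logits = head(feat + delta)
    adv_loss = F.kl_div(F.log_softmax(adv_logits, dim=1), prob,
                        reduction="batchmean")
    return lam * adv_loss
```

One appeal of perturbing features rather than input images, as the abstract notes, is efficiency: the inner adversarial step only needs extra passes through the linear head, not through the whole backbone.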
