Paper Title
Robustified Domain Adaptation
Paper Authors
Paper Abstract
Unsupervised domain adaptation (UDA) is widely used to transfer knowledge from a labeled source domain to an unlabeled target domain with a different data distribution. While extensive studies have shown that deep learning models are vulnerable to adversarial attacks, the adversarial robustness of models in domain adaptation applications has largely been overlooked. This paper points out that the inevitable domain distribution deviation in UDA is a critical barrier to model robustness on the target domain. To address this problem, we propose a novel Class-consistent Unsupervised Robust Domain Adaptation (CURDA) framework for training robust UDA models. With the introduced contrastive robust training and source-anchored adversarial contrastive losses, the proposed CURDA framework effectively robustifies UDA models by simultaneously minimizing the data distribution deviation and the distance between clean-adversarial pairs in the target domain, without creating classification confusion. Experiments on several public benchmarks show that CURDA significantly improves model robustness in the target domain at only a minor cost in accuracy on clean samples.
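To make the two loss ideas named in the abstract concrete, below is a minimal PyTorch sketch. The function name, the pseudo-label input, and the exact loss forms are illustrative assumptions inferred from the abstract, not the paper's actual equations: the clean-adversarial pairing term is written as a cosine-similarity consistency loss, and the source-anchored term as a supervised-contrastive loss over source class anchors.

```python
import torch
import torch.nn.functional as F


def curda_style_losses(feat, src_x, src_y, tgt_x, tgt_x_adv, tgt_pseudo_y,
                       temperature=0.1):
    """Illustrative sketch (assumed form, not the paper's formulation).

    feat:         shared feature extractor (nn.Module)
    src_x, src_y: labeled source batch
    tgt_x:        clean target batch; tgt_x_adv: its adversarial counterpart
    tgt_pseudo_y: pseudo-labels for the target batch (assumed here to come
                  from the current classifier's predictions on clean inputs)
    """
    z_src = F.normalize(feat(src_x), dim=1)      # source class anchors
    z_tgt = F.normalize(feat(tgt_x), dim=1)      # clean target features
    z_adv = F.normalize(feat(tgt_x_adv), dim=1)  # adversarial target features

    # (1) Contrastive robust training (illustrative form): minimize the
    # distance between each target clean-adversarial pair by maximizing
    # the cosine similarity of their features.
    pair_loss = (1.0 - (z_adv * z_tgt).sum(dim=1)).mean()

    # (2) Source-anchored adversarial contrastive loss (illustrative form):
    # treat labeled source features as class anchors; pull adversarial
    # target features toward same-class anchors and away from other-class
    # anchors, so robustification does not create classification confusion.
    sim = z_adv @ z_src.t() / temperature                     # (Bt, Bs)
    log_prob = F.log_softmax(sim, dim=1)
    same_class = (tgt_pseudo_y.unsqueeze(1) == src_y.unsqueeze(0)).float()
    anchor_loss = -(log_prob * same_class).sum(dim=1) \
                  / same_class.sum(dim=1).clamp(min=1.0)
    return pair_loss, anchor_loss.mean()
```

In training, these two terms would be added to a standard UDA objective (source classification loss plus a distribution-alignment loss), with `tgt_x_adv` generated on the fly by an attack such as PGD; the weighting between the terms is left unspecified here, since the abstract does not state it.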