Paper Title
ROSE: Robust Selective Fine-tuning for Pre-trained Language Models
Paper Authors
Paper Abstract
Even though large-scale language models have achieved excellent performance, they remain vulnerable to various adversarial attacks. A large body of defense methods has been proposed; however, these methods are still limited by redundant attack search spaces and an inability to defend against diverse types of attacks. In this work, we present a novel fine-tuning approach called \textbf{RO}bust \textbf{SE}lective fine-tuning (\textbf{ROSE}) to address this issue. ROSE conducts selective updates when adapting pre-trained models to downstream tasks, filtering out spurious and unrobust parameter updates. Specifically, we propose two strategies, first-order and second-order ROSE, for selecting target robust parameters. Experimental results show that ROSE achieves significant improvements in adversarial robustness on various downstream NLP tasks, and an ensemble of the two even surpasses both variants. Furthermore, ROSE can be easily incorporated into existing fine-tuning methods to further improve their adversarial robustness. Empirical analysis confirms that ROSE eliminates unrobust spurious updates during fine-tuning, leading to solutions that correspond to flatter and wider optima than those reached by conventional fine-tuning. Code is available at \url{https://github.com/jiangllan/ROSE}.
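To make the idea of selective updates concrete, below is a minimal sketch of masking parameter updates during fine-tuning. It is not the authors' reference implementation: the selection criterion used here (keeping only the largest-magnitude gradient entries per parameter tensor) and the keep_ratio value are illustrative assumptions, not ROSE's actual first-order or second-order selection rules; see the repository linked above for the real method.

# Hypothetical sketch of selective fine-tuning: apply only a filtered subset
# of the gradient at each step. The |gradient|-magnitude criterion and the
# keep ratio below are illustrative assumptions, not the paper's criteria.
import torch
from torch import nn

def selective_step(model: nn.Module, optimizer: torch.optim.Optimizer,
                   loss: torch.Tensor, keep_ratio: float = 0.3) -> None:
    """Backpropagate, then zero out all but the top-`keep_ratio` fraction of
    gradient entries (by magnitude) in each parameter tensor before stepping."""
    optimizer.zero_grad()
    loss.backward()
    for p in model.parameters():
        if p.grad is None:
            continue
        g = p.grad.abs().flatten()
        k = max(1, int(keep_ratio * g.numel()))
        # Threshold at the k-th largest |gradient|; smaller updates are dropped.
        threshold = torch.topk(g, k, largest=True).values.min()
        mask = (p.grad.abs() >= threshold).to(p.grad.dtype)
        p.grad.mul_(mask)
    optimizer.step()

In a standard training loop, selective_step(model, optimizer, loss) would replace the usual loss.backward(); optimizer.step() pair.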