Paper Title

Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model

Paper Authors

Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao

Paper Abstract

Recently, the problem of the robustness of pre-trained language models (PrLMs) has received increasing research interest. The latest studies on adversarial attacks achieve high attack success rates against PrLMs, claiming that PrLMs are not robust. However, we find that the adversarial samples that PrLMs fail on are mostly non-natural and do not appear in reality. We question the validity of the current evaluation of PrLM robustness based on these non-natural adversarial samples and propose an anomaly detector to evaluate the robustness of PrLMs with more natural adversarial samples. We also investigate two applications of the anomaly detector: (1) In data augmentation, we employ the anomaly detector to force the generation of augmented data that are distinguished as non-natural, which brings larger gains in the accuracy of PrLMs. (2) We apply the anomaly detector to a defense framework to enhance the robustness of PrLMs. It can be used to defend against all types of attacks and achieves higher accuracy on both adversarial samples and compliant samples than other defense frameworks.
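
The defense framework in application (2) is only described at the abstract level, so the gating logic below is an illustrative sketch rather than the paper's implementation: an anomaly detector scores each input, and inputs flagged as non-natural are intercepted before they reach the protected PrLM classifier. `DetectorGatedClassifier`, `toy_anomaly_score`, and `toy_classifier` are all hypothetical stand-ins for the paper's trained components.

```python
# Illustrative sketch (not the paper's implementation): gate a PrLM
# classifier behind an anomaly detector that flags non-natural inputs.
from dataclasses import dataclass
from typing import Callable


@dataclass
class DetectorGatedClassifier:
    """Route inputs through an anomaly detector before the task classifier.

    anomaly_score: text -> score in [0, 1]; higher means less natural.
    classify:      the protected classifier, text -> label.
    threshold:     inputs scoring above it are flagged, not classified.
    """
    anomaly_score: Callable[[str], float]
    classify: Callable[[str], str]
    threshold: float = 0.5

    def __call__(self, text: str) -> str:
        if self.anomaly_score(text) > self.threshold:
            # Likely a non-natural (adversarially perturbed) sample.
            return "REJECTED_NON_NATURAL"
        return self.classify(text)


def toy_anomaly_score(text: str) -> float:
    # Crude proxy: fraction of unusual characters. A real detector would
    # be a trained classifier or a language-model naturalness score.
    rare = sum(1 for c in text if not (c.isalnum() or c.isspace() or c in ".,!?'"))
    return rare / max(len(text), 1)


def toy_classifier(text: str) -> str:
    # Stand-in for a fine-tuned PrLM sentiment classifier.
    return "positive" if "good" in text.lower() else "negative"


model = DetectorGatedClassifier(toy_anomaly_score, toy_classifier, threshold=0.1)
print(model("The movie was good."))       # -> positive
print(model("Th3 m0v!e w@s g*#od$%^&*"))  # -> REJECTED_NON_NATURAL
```

A real deployment would swap the stand-ins for the paper's trained anomaly detector and a fine-tuned PrLM; the point of the sketch is the control flow, which also explains why such a defense is attack-agnostic: it screens inputs before classification instead of targeting any particular attack algorithm.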
