Paper Title

Gradient Concealment: Free Lunch for Defending Adversarial Attacks

Paper Authors

Sen Pei, Jiaxi Sun, Xiaopeng Zhang, Gaofeng Meng

Abstract

Recent studies show that deep neural networks (DNNs) have achieved great success in various tasks. However, even \emph{state-of-the-art} deep learning-based classifiers are extremely vulnerable to adversarial examples, resulting in a sharp decay of discrimination accuracy in the presence of enormous unknown attacks. Given that neural networks are widely used in open-world scenarios, which can be safety-critical, mitigating the adversarial effects of deep learning methods has become an urgent need. Generally, conventional DNNs can be attacked with a dramatically high success rate since their gradient is exposed thoroughly in the white-box scenario, making it effortless to ruin a well-trained classifier with only imperceptible perturbations in the raw data space. To tackle this problem, we propose a training-free, plug-and-play layer, termed the \textbf{G}radient \textbf{C}oncealment \textbf{M}odule (GCM), which conceals the vulnerable direction of the gradient while guaranteeing classification accuracy at inference time. GCM reports superior defense results on the ImageNet classification benchmark, improving top-1 attack robustness (AR) by up to 63.41\% when faced with adversarial inputs compared to vanilla DNNs. Moreover, we use GCM in the CVPR 2022 Robust Classification Challenge, currently achieving \textbf{2nd} place in Phase II with only a tiny version of ConvNext. The code will be made available.
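The abstract does not spell out GCM's exact formulation, but it does describe the recipe: a training-free, plug-and-play layer whose forward pass leaves predictions untouched while its backward pass hides the vulnerable gradient direction from white-box attackers. The PyTorch sketch below only illustrates that general idea under stated assumptions; `GradientConcealmentLayer`, `_ConcealGradient`, and `pretrained_classifier` are illustrative names, not the authors' code.

```python
# Minimal sketch (assumption, not the paper's GCM): identity forward pass so
# inference accuracy is preserved, obfuscated backward pass so gradient-based
# attacks cannot recover the true gradient direction.
import torch
import torch.nn as nn


class _ConcealGradient(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Identity mapping: predictions equal those of the wrapped classifier.
        return x

    @staticmethod
    def backward(ctx, grad_output):
        # Replace the incoming gradient with random noise of the same norm,
        # concealing the vulnerable direction from white-box attackers.
        noise = torch.randn_like(grad_output)
        noise = noise / (noise.norm() + 1e-12) * grad_output.norm()
        return noise


class GradientConcealmentLayer(nn.Module):
    """Training-free, plug-and-play: drop in front of any trained classifier."""

    def forward(self, x):
        return _ConcealGradient.apply(x)


# Hypothetical usage: wrap an already-trained model without retraining.
# defended_model = nn.Sequential(GradientConcealmentLayer(), pretrained_classifier)
```

Because the forward pass is the identity, the defended model's clean accuracy is identical to the original classifier's; only the gradient signal seen by an attacker changes.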
