Paper Title


Can we have it all? On the Trade-off between Spatial and Adversarial Robustness of Neural Networks

Authors

Kamath, Sandesh, Deshpande, Amit, Subrahmanyam, K V, Balasubramanian, Vineeth N

Abstract


(Non-)robustness of neural networks to small, adversarial pixel-wise perturbations, and as more recently shown, to even random spatial transformations (e.g., translations, rotations) entreats both theoretical and empirical understanding. Spatial robustness to random translations and rotations is commonly attained via equivariant models (e.g., StdCNNs, GCNNs) and training augmentation, whereas adversarial robustness is typically achieved by adversarial training. In this paper, we prove a quantitative trade-off between spatial and adversarial robustness in a simple statistical setting. We complement this empirically by showing that: (a) as the spatial robustness of equivariant models improves by training augmentation with progressively larger transformations, their adversarial robustness worsens progressively, and (b) as the state-of-the-art robust models are adversarially trained with progressively larger pixel-wise perturbations, their spatial robustness drops progressively. Towards achieving Pareto-optimality in this trade-off, we propose a method based on curriculum learning that trains gradually on more difficult perturbations (both spatial and adversarial) to improve spatial and adversarial robustness simultaneously.
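The curriculum idea in the abstract — training gradually on more difficult perturbations along both axes — can be sketched as a schedule that ramps the adversarial budget (L-infinity epsilon) and the spatial budget (rotation angle) together over training. This is a minimal illustrative sketch, not the paper's actual method: the linear ramp, the function name, and the parameter values (`eps_max`, `rot_max`) are all assumptions chosen for clarity.

```python
def curriculum_budgets(epoch, total_epochs, eps_max=8 / 255, rot_max=30.0):
    """Return (eps, rot) perturbation budgets for a given epoch.

    Linearly ramps both the adversarial L-inf budget `eps` and the
    spatial rotation budget `rot` (degrees) from near zero at the
    start of training to their maxima by the final epoch, so the
    model faces progressively harder perturbations of both kinds.
    Hypothetical schedule; the paper may use a different ramp.
    """
    frac = min(1.0, (epoch + 1) / total_epochs)
    return eps_max * frac, rot_max * frac


# At each epoch, a training loop would draw adversarial examples
# within `eps` and random rotations within +/- `rot` degrees:
for epoch in range(5):
    eps, rot = curriculum_budgets(epoch, total_epochs=5)
    print(f"epoch {epoch}: eps={eps:.4f}, rot={rot:.1f} deg")
```

The design choice here is that both difficulty axes grow in lockstep, so neither form of robustness is sacrificed early for the other; one could equally ramp them on separate schedules.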
