Paper Title


Affine-Invariant Robust Training

Authors

Oriol Barbany Mayor

Abstract


The field of adversarial robustness has attracted significant attention in machine learning. Contrary to the common approach of training models that are accurate in the average case, it aims at training models that are accurate on worst-case inputs, and hence yields more robust and reliable models. Put differently, it tries to prevent an adversary from fooling a model. The study of adversarial robustness has largely focused on $\ell_p$-bounded adversarial perturbations, i.e., modifications of the inputs bounded in some $\ell_p$ norm. Nevertheless, it has been shown that state-of-the-art models are also vulnerable to other, more natural perturbations such as affine transformations, which have long been used in machine learning for data augmentation. This project reviews previous work on spatial robustness methods and proposes evolution strategies as zeroth-order optimization algorithms to find the worst-case affine transform for each input. The proposed method effectively yields robust models and allows the introduction of non-parametric adversarial perturbations.
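The inner maximization the abstract describes (finding the worst-case affine transform per input with an evolution strategy, using only black-box loss evaluations) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameterization `theta = (angle, tx, ty)` and the smooth surrogate loss `loss_under_affine` are stand-ins for warping an image and evaluating the classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_under_affine(theta):
    # Hypothetical stand-in for "classifier loss on the input warped
    # by affine parameters theta = (angle, tx, ty)". In practice this
    # would apply the transform to an image and query the model as a
    # black box; a smooth surrogate keeps the sketch runnable.
    angle, tx, ty = theta
    return -((angle - 0.3) ** 2 + (tx + 0.1) ** 2 + (ty - 0.2) ** 2)

def es_worst_affine(n_iters=200, pop=20, sigma=0.1, lr=0.05):
    # Basic evolution strategy (zeroth-order optimization): probe the
    # loss at randomly perturbed affine parameters, build a gradient
    # estimate from the probes, and ascend toward the worst-case
    # transform -- no gradients of the model are ever required.
    theta = np.zeros(3)
    for _ in range(n_iters):
        eps = rng.standard_normal((pop, 3))
        rewards = np.array([loss_under_affine(theta + sigma * e) for e in eps])
        rewards -= rewards.mean()  # baseline subtraction reduces variance
        grad = (eps * rewards[:, None]).mean(axis=0) / sigma
        theta += lr * grad  # gradient *ascent*: we maximize the loss
    return theta

theta_star = es_worst_affine()
```

In a robust-training loop, the transform `theta_star` found for each input would be applied before the usual training step, so the model learns on its current worst-case affine warps.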
