Paper Title

Surprises in adversarially-trained linear regression

Authors

Ribeiro, Antônio H., Zachariah, Dave, Schön, Thomas B.

Abstract

State-of-the-art machine learning models can be vulnerable to very small input perturbations that are adversarially constructed. Adversarial training is an effective approach to defend against such examples. It is formulated as a min-max problem, searching for the best solution when the training data was corrupted by the worst-case attacks. For linear regression problems, adversarial training can be formulated as a convex problem. We use this reformulation to make two technical contributions: First, we formulate the training problem as an instance of robust regression to reveal its connection to parameter-shrinking methods, specifically that $\ell_\infty$-adversarial training produces sparse solutions. Secondly, we study adversarial training in the overparameterized regime, i.e. when there are more parameters than data. We prove that adversarial training with small disturbances gives the solution with the minimum-norm that interpolates the training data. Ridge regression and lasso approximate such interpolating solutions as their regularization parameter vanishes. By contrast, for adversarial training, the transition into the interpolation regime is abrupt and for non-zero values of disturbance. This result is proved and illustrated with numerical examples.
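The min-max structure described above can be made concrete for linear regression: under an ℓ∞-bounded attack of radius δ on the inputs, the worst-case squared residual has the closed form (|y − xᵀβ| + δ‖β‖₁)², which is exactly the ℓ1-shrinkage connection the abstract mentions. The sketch below (an illustrative check, not the authors' code; all variable names are assumptions) verifies this identity numerically by constructing the worst-case perturbation Δxᵢ = −δ · sign(rᵢ) · sign(β) and comparing its loss to the closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 5
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

beta = rng.standard_normal(p)  # an arbitrary candidate parameter vector
delta = 0.3                    # ell_inf attack radius (hypothetical value)

# Closed-form worst-case loss: (|y_i - x_i' beta| + delta * ||beta||_1)^2
resid = y - X @ beta
closed_form = (np.abs(resid) + delta * np.linalg.norm(beta, 1)) ** 2

# Explicit worst-case ell_inf perturbation: dx_i = -delta * sign(r_i) * sign(beta),
# which shifts each residual by delta * ||beta||_1 in the direction that enlarges it.
dx = -delta * np.sign(resid)[:, None] * np.sign(beta)[None, :]
attacked_loss = (y - (X + dx) @ beta) ** 2

# The two losses agree term by term (residuals are nonzero w.p. 1 here).
assert np.allclose(closed_form, attacked_loss)
```

Because the inner maximization collapses to this closed form, adversarial training reduces to minimizing a convex objective in β, which is the reformulation the paper exploits.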
