Paper Title

Tight Certification of Adversarially Trained Neural Networks via Nonconvex Low-Rank Semidefinite Relaxations

Paper Authors

Hong-Ming Chiu, Richard Y. Zhang

Paper Abstract

Adversarial training is well-known to produce high-quality neural network models that are empirically robust against adversarial perturbations. Nevertheless, once a model has been adversarially trained, one often desires a certification that the model is truly robust against all future attacks. Unfortunately, when faced with adversarially trained models, all existing approaches have significant trouble making certifications that are strong enough to be practically useful. Linear programming (LP) techniques in particular face a "convex relaxation barrier" that prevents them from making high-quality certifications, even after refinement with mixed-integer linear programming (MILP) and branch-and-bound (BnB) techniques. In this paper, we propose a nonconvex certification technique, based on a low-rank restriction of a semidefinite programming (SDP) relaxation. The nonconvex relaxation makes strong certifications comparable to much more expensive SDP methods, while optimizing over dramatically fewer variables comparable to much weaker LP methods. Despite nonconvexity, we show how off-the-shelf local optimization algorithms can be used to achieve and to certify global optimality in polynomial time. Our experiments find that the nonconvex relaxation almost completely closes the gap towards exact certification of adversarially trained models.
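To make the "low-rank restriction of an SDP relaxation" idea concrete, below is a minimal, hypothetical sketch of the general Burer–Monteiro recipe the abstract alludes to: replace the positive-semidefinite matrix variable X of an SDP with a thin factorization X = V Vᵀ and hand the resulting nonconvex problem to an off-the-shelf local optimizer. The example uses a toy max-cut SDP with numpy and scipy's L-BFGS-B solver; these choices, and the problem itself, are illustrative assumptions, not the paper's certification formulation for adversarially trained networks.

```python
import numpy as np
from scipy.optimize import minimize

# Toy max-cut SDP on a 5-cycle, used only to illustrate the low-rank
# (Burer-Monteiro) restriction; this is NOT the paper's certification problem.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A  # graph Laplacian

k = 3  # factorization rank: optimize over V in R^{n x k} instead of X in S^n_+

def objective(v_flat):
    V = v_flat.reshape(n, k)
    # Enforce the SDP constraint diag(X) = 1 by normalizing each row of V,
    # so that X = V V^T automatically satisfies X_ii = 1 and X >= 0.
    V = V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)
    X = V @ V.T
    return -0.25 * np.sum(L * X)  # negate: we maximize <L/4, X>

# Off-the-shelf local optimization of the nonconvex low-rank problem.
rng = np.random.default_rng(0)
res = minimize(objective, rng.standard_normal(n * k), method="L-BFGS-B")
print("low-rank SDP value (toy max-cut):", -res.fun)
```

The row normalization inside the objective keeps the factored iterate feasible for the diagonal constraint, so a generic unconstrained local solver can be applied directly. The paper's result that local methods can reach, and certify, global optima of its low-rank relaxation in polynomial time is what makes this style of approach viable for robustness certification.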
