Paper Title

Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers

Paper Authors

Zhu, Chen, Ni, Renkun, Chiang, Ping-yeh, Li, Hengduo, Huang, Furong, Goldstein, Tom

Paper Abstract

Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical robustness. In principle, convex relaxation can provide tight bounds if the solution to the relaxed problem is feasible for the original non-convex problem. We propose two regularizers that can be used to train neural networks that yield tighter convex relaxation bounds for robustness. In all of our experiments, the proposed regularizers result in higher certified accuracy than non-regularized baselines.
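The kind of relaxation-based bound the abstract refers to can be illustrated with a minimal sketch. Below, interval bound propagation (one of the simplest convex relaxations) is pushed through a tiny two-layer ReLU network, and a "tightness penalty" is computed as the mean width of the relaxed output bounds. Note this penalty is a hypothetical stand-in for illustration only; it is not the paper's actual regularizer.

```python
import numpy as np

def interval_bounds(W, b, lo, up):
    # Propagate elementwise interval bounds through an affine layer.
    # Splitting W into its positive and negative parts picks the correct
    # input endpoint for each output coordinate.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = Wp @ lo + Wn @ up + b
    new_up = Wp @ up + Wn @ lo + b
    return new_lo, new_up

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)

x = rng.standard_normal(3)
eps = 0.1                       # radius of the norm-bounded perturbation
lo, up = x - eps, x + eps       # input interval

lo, up = interval_bounds(W1, b1, lo, up)
lo, up = np.maximum(lo, 0.0), np.maximum(up, 0.0)  # ReLU is monotone
lo, up = interval_bounds(W2, b2, lo, up)

# Hypothetical tightness penalty: the mean width of the relaxed output
# bounds. Adding a term like this to the training loss encourages the
# network to admit tighter relaxations, in the spirit of the abstract.
tightness_penalty = float(np.mean(up - lo))
```

By construction the bounds are sound: the output of the unperturbed network always lies inside `[lo, up]`, and the penalty shrinks as the relaxation tightens.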
