Paper Title

The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification

Authors

Christian Tjandraatmadja, Ross Anderson, Joey Huchette, Will Ma, Krunal Patel, Juan Pablo Vielma

Abstract

We improve the effectiveness of propagation- and linear-optimization-based neural network verification algorithms with a new tightened convex relaxation for ReLU neurons. Unlike previous single-neuron relaxations which focus only on the univariate input space of the ReLU, our method considers the multivariate input space of the affine pre-activation function preceding the ReLU. Using results from submodularity and convex geometry, we derive an explicit description of the tightest possible convex relaxation when this multivariate input is over a box domain. We show that our convex relaxation is significantly stronger than the commonly used univariate-input relaxation which has been proposed as a natural convex relaxation barrier for verification. While our description of the relaxation may require an exponential number of inequalities, we show that they can be separated in linear time and hence can be efficiently incorporated into optimization algorithms on an as-needed basis. Based on this novel relaxation, we design two polynomial-time algorithms for neural network verification: a linear-programming-based algorithm that leverages the full power of our relaxation, and a fast propagation algorithm that generalizes existing approaches. In both cases, we show that for a modest increase in computational effort, our strengthened relaxation enables us to verify a significantly larger number of instances compared to similar algorithms.
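For context, the "commonly used univariate-input relaxation" that the abstract identifies as the convex relaxation barrier is the triangle relaxation of a single ReLU: given bounds on the scalar pre-activation z, it takes y >= 0, y >= z, and one linear upper-bound cut. The sketch below computes that upper cut; it is an illustrative helper (the function name and signature are ours, not from the paper) and does not implement the paper's tightened multivariate relaxation.

```python
def triangle_relaxation(pre_lb, pre_ub):
    """Upper envelope of y = max(0, z) for z in [pre_lb, pre_ub], pre_lb < 0 < pre_ub.

    Returns (slope, intercept) such that y <= slope * z + intercept.
    Together with y >= 0 and y >= z, this is the tightest convex relaxation
    obtainable from the single pre-activation variable z alone.
    """
    assert pre_lb < 0 < pre_ub, "relaxation only needed when the ReLU is unstable"
    # Line through the two endpoints (pre_lb, 0) and (pre_ub, pre_ub).
    slope = pre_ub / (pre_ub - pre_lb)
    intercept = -slope * pre_lb
    return slope, intercept
```

For example, with z in [-1, 2] the upper cut is y <= (2/3) z + 2/3, which is tight at both endpoints but overestimates at z = 0 (it allows y = 2/3 although max(0, 0) = 0). The paper's relaxation tightens exactly this kind of gap by working with the multivariate input of the affine pre-activation over its box domain rather than with z alone.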
