Paper Title
Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers
Paper Authors
Abstract
Formal verification of neural networks (NNs) is a challenging and important problem. Existing efficient complete solvers typically rely on the branch-and-bound (BaB) process, which splits the problem domain into sub-domains and solves each sub-domain using faster but weaker incomplete verifiers, such as Linear Programming (LP) on linearly relaxed sub-domains. In this paper, we propose to replace LP during the BaB process with backward-mode linear relaxation based perturbation analysis (LiRPA), which can be implemented efficiently on typical machine learning accelerators such as GPUs and TPUs. However, unlike LP, LiRPA applied naively can produce much weaker bounds and may even fail to detect certain conflicts among sub-domains during splitting, making the entire procedure incomplete after BaB. To address these challenges, we apply a fast gradient-based bound-tightening procedure combined with batch splitting, and design the procedure to invoke the LP bounding step only minimally, enabling us to effectively use LiRPA on accelerator hardware for the challenging complete NN verification problem and significantly outperform LP-based approaches. On a single GPU, we demonstrate an order-of-magnitude speedup over existing LP-based approaches.
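The BaB loop the abstract describes can be sketched in miniature: a cheap but incomplete bound procedure tries to certify each sub-domain, and splitting undecided sub-domains restores completeness. The toy function and interval-style relaxation below are illustrative stand-ins, not the paper's LiRPA, gradient-based tightening, or LP machinery.

```python
def f(x):
    # Toy stand-in for a network output: f(x) = x^2 - 2x + 2, which is
    # >= 1 > 0 everywhere, so the property "f(x) > 0" should be provable.
    return x * x - 2.0 * x + 2.0

def incomplete_lower_bound(lo, hi):
    # Sound but loose lower bound of f on [lo, hi]: bound x^2 and -2x
    # independently, like a cheap relaxation. Looseness is what forces
    # the branch-and-bound loop to split.
    sq_min = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return sq_min - 2.0 * hi + 2.0

def verify_bab(lo, hi, max_splits=1000):
    # Try to prove f(x) > 0 for all x in [lo, hi].
    # Returns True if proven; False if the split budget is exhausted.
    queue = [(lo, hi)]
    splits = 0
    while queue:
        a, b = queue.pop()
        if incomplete_lower_bound(a, b) > 0.0:
            continue  # sub-domain certified by the incomplete verifier
        if splits >= max_splits:
            return False  # undecided within budget
        mid = 0.5 * (a + b)
        queue.extend([(a, mid), (mid, b)])  # branch: split the sub-domain
        splits += 1
    return True
```

The paper's key point is that the `incomplete_lower_bound` step, when expressed as LiRPA, can be batched over many sub-domains at once on a GPU, whereas an LP solver handles them one by one on the CPU.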