Paper Title

Chordal Sparsity for SDP-based Neural Network Verification

Authors

Anton Xue, Lars Lindemann, Rajeev Alur

Abstract


Neural networks are central to many emerging technologies, but verifying their correctness remains a major challenge. It is known that network outputs can be sensitive and fragile to even small input perturbations, thereby increasing the risk of unpredictable and undesirable behavior. Fast and accurate verification of neural networks is therefore critical to their widespread adoption, and in recent years, various methods have been developed as a response to this problem. In this paper, we focus on improving semidefinite programming (SDP) based techniques for neural network verification. Such techniques offer the power of expressing complex geometric constraints while retaining a convex problem formulation, but scalability remains a major issue in practice. Our starting point is the DeepSDP framework proposed by Fazlyab et al., which uses quadratic constraints to abstract the verification problem into a large-scale SDP. However, solving this SDP quickly becomes intractable when the network grows. Our key observation is that by leveraging chordal sparsity, we can decompose the primary computational bottleneck of DeepSDP -- a large linear matrix inequality (LMI) -- into an equivalent collection of smaller LMIs. We call our chordally sparse optimization program Chordal-DeepSDP and prove that its construction is identically expressive as that of DeepSDP. Moreover, we show that additional analysis of Chordal-DeepSDP allows us to further rewrite its collection of LMIs in a second level of decomposition that we call Chordal-DeepSDP-2 -- which results in another significant computational gain. Finally, we provide numerical experiments on real networks of learned cart-pole dynamics, showcasing the computational advantage of Chordal-DeepSDP and Chordal-DeepSDP-2 over DeepSDP.
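The key idea in the abstract — decomposing one large LMI into an equivalent collection of smaller LMIs via chordal sparsity — can be illustrated at toy scale. The sketch below is not the paper's implementation; it only demonstrates, with a hand-picked tridiagonal matrix and hand-chosen blocks, the clique-based decomposition that makes such a rewriting possible: a PSD matrix whose sparsity graph is chordal can be written as a sum of small PSD blocks, each supported on one maximal clique.

```python
import numpy as np

n = 4
# A tridiagonal PSD matrix; its sparsity graph (a path) is chordal,
# with maximal cliques {0,1}, {1,2}, {2,3}.
A = (np.diag([2.0] * n)
     + np.diag([-1.0] * (n - 1), 1)
     + np.diag([-1.0] * (n - 1), -1))

cliques = [(0, 1), (1, 2), (2, 3)]
# One small block per clique; the overlapping diagonal entries are split
# between neighboring cliques (here each interior 2 splits as 1 + 1).
blocks = [
    np.array([[2.0, -1.0], [-1.0, 1.0]]),
    np.array([[1.0, -1.0], [-1.0, 1.0]]),
    np.array([[1.0, -1.0], [-1.0, 2.0]]),
]

# Each small block is itself PSD -- these are the "smaller LMIs"...
for B in blocks:
    assert np.linalg.eigvalsh(B).min() >= -1e-9

# ...and lifting each block back to n x n and summing recovers A exactly,
# so checking the small blocks certifies the large matrix.
S = np.zeros((n, n))
for (i, j), B in zip(cliques, blocks):
    S[np.ix_([i, j], [i, j])] += B
assert np.allclose(S, A)
print("decomposition verified")
```

In an SDP solver, replacing the single n x n PSD constraint with PSD constraints on the clique blocks is what cuts the cost as n grows; the paper's contribution is carrying this out (twice over) for the specific LMI structure of DeepSDP.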
