Paper Title
Scalable Verification of Quantized Neural Networks (Technical Report)
Paper Authors
Paper Abstract
Formal verification of neural networks is an active topic of research, and recent advances have significantly increased the size of the networks that verification tools can handle. However, most methods are designed to verify an idealized model of the actual network, one that operates over real arithmetic and ignores rounding imprecision. This idealization stands in stark contrast to network quantization, a technique that trades numerical precision for computational efficiency and is therefore often applied in practice. Neglecting the rounding errors of such low-bit quantized neural networks has been shown to lead to wrong conclusions about a network's correctness. Thus, the desired approach for verifying quantized neural networks is one that takes these rounding errors into account. In this paper, we show that verifying the bit-exact implementation of quantized neural networks with bit-vector specifications is PSPACE-hard, even though verifying idealized real-valued networks and checking satisfiability of bit-vector specifications alone are each in NP. Furthermore, we explore several practical heuristics toward closing the complexity gap between idealized and bit-exact verification. In particular, we propose three techniques for making SMT-based verification of quantized neural networks more scalable. Our experiments demonstrate that our proposed methods yield a speedup of up to three orders of magnitude over existing approaches.
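To make the bit-exact setting concrete, the following is a minimal sketch (not the paper's tool or benchmarks) of how a single quantized neuron can be encoded as SMT bit-vector constraints with the z3 solver. The bit widths, weights, shift amount, input box, and property bound are all illustrative assumptions chosen for this example.

```python
# A minimal sketch of SMT-based bit-exact verification of one quantized
# neuron, using z3 bit-vectors (pip install z3-solver). All concrete
# values below are hypothetical, not taken from the paper.
from z3 import BitVec, BitVecVal, SignExt, If, Solver, sat

IN_BITS, ACC_BITS = 8, 16

# Symbolic 8-bit signed inputs.
x0 = BitVec('x0', IN_BITS)
x1 = BitVec('x1', IN_BITS)

def acc(v):
    # Sign-extend an 8-bit value into the 16-bit accumulator.
    return SignExt(ACC_BITS - IN_BITS, v)

# Fixed (hypothetical) integer weights and bias.
w0 = BitVecVal(3, ACC_BITS)
w1 = BitVecVal(-5, ACC_BITS)
b = BitVecVal(7, ACC_BITS)

# Bit-exact affine step: 16-bit accumulation, then rescaling by an
# arithmetic right shift. The shift floors the result, which is exactly
# where an idealized real-valued model diverges from the implementation.
pre = (w0 * acc(x0) + w1 * acc(x1) + b) >> 4

# ReLU on the accumulator (signed comparison).
y = If(pre > 0, pre, BitVecVal(0, ACC_BITS))

# Property: within the input box [-4, 4]^2, can the output exceed 2?
s = Solver()
s.add(x0 >= -4, x0 <= 4, x1 >= -4, x1 <= 4)
s.add(y > 2)

res = s.check()
print(res)          # sat -> bit-exact counterexample; unsat -> bound holds
if res == sat:
    print(s.model())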