Paper Title

Learning Neural Networks under Input-Output Specifications

Paper Authors

Zain ul Abdeen, He Yin, Vassilis Kekatos, Ming Jin

Paper Abstract

In this paper, we examine an important problem of learning neural networks that certifiably meet certain specifications on input-output behaviors. Our strategy is to find an inner approximation of the set of admissible policy parameters, which is convex in a transformed space. To this end, we address the key technical challenge of convexifying the verification condition for neural networks, which is derived by abstracting the nonlinear specifications and activation functions with quadratic constraints. In particular, we propose a reparametrization scheme of the original neural network based on loop transformation, which leads to a convex condition that can be enforced during learning. This theoretical construction is validated in an experiment that specifies reachable sets for different regions of inputs.
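As background for the abstraction mentioned above: in quadratic-constraint approaches to neural-network verification, an activation function is typically over-approximated by a sector condition, and a loop transformation recenters that sector. The sketch below uses illustrative notation (phi for the activation, [alpha, beta] for the sector) and shows the standard form of these two ingredients; it is not necessarily the exact formulation used in the paper.

\[
\bigl(\phi(v) - \alpha v\bigr)\bigl(\beta v - \phi(v)\bigr) \ge 0 \quad \text{for all } v,
\qquad
\tilde{\phi}(v) = \frac{2}{\beta - \alpha}\Bigl(\phi(v) - \frac{\alpha + \beta}{2}\, v\Bigr)
\;\Longrightarrow\;
\tilde{\phi}(v)^2 \le v^2 .
\]

The first inequality abstracts a sector-bounded activation (for example, ReLU and tanh both lie in the sector [0, 1]); the loop-transformed nonlinearity lies in the normalized sector [-1, 1], which corresponds to the normalization the abstract refers to when it says the loop-transformation-based reparametrization leads to a convex condition that can be enforced during learning.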
