Paper Title

A priori guarantees of finite-time convergence for Deep Neural Networks

Authors

Anushree Rankawat, Mansi Rankawat, Harshal B. Oza

Abstract

In this paper, we perform a Lyapunov-based analysis of the loss function to derive an a priori upper bound on the settling time of deep neural networks. While previous studies have attempted to understand deep learning through a control-theoretic framework, there is limited work on a priori finite-time convergence analysis. Drawing on advances in the analysis of finite-time control of non-linear systems, we provide a priori guarantees of finite-time convergence in a deterministic control-theoretic setting. We formulate the supervised learning framework as a control problem in which the weights of the network are the control inputs and learning translates into a tracking problem. An analytical formula for the finite-time upper bound on the settling time is computed a priori under the assumption of boundedness of the input. Finally, we prove the robustness and sensitivity of the loss function against input perturbations.
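The settling-time guarantee described above rests on the standard finite-time Lyapunov argument: if the loss (Lyapunov function) V satisfies dV/dt <= -c * V**alpha with 0 < alpha < 1, then V reaches zero no later than T = V(0)**(1-alpha) / (c*(1-alpha)). The sketch below is illustrative only (the constants c, alpha, and V0 are assumptions, not values from the paper); it simulates the worst-case decay and compares the observed settling time against the a priori bound.

```python
# Illustrative sketch (not the paper's construction): finite-time convergence
# under the classical Lyapunov condition  dV/dt <= -c * V**alpha, 0 < alpha < 1,
# which yields the a priori settling-time bound
#     T <= V(0)**(1 - alpha) / (c * (1 - alpha)).
c, alpha = 2.0, 0.5   # assumed constants for illustration
V0 = 4.0              # assumed initial loss value

# Analytic a priori bound, computed before any simulation.
T_bound = V0 ** (1 - alpha) / (c * (1 - alpha))

# Forward-Euler simulation of the worst case dV/dt = -c * V**alpha.
dt, t, V = 1e-4, 0.0, V0
while V > 1e-9:
    V = max(V - dt * c * V ** alpha, 0.0)  # clamp: V cannot go negative
    t += dt

print(f"a priori bound T = {T_bound:.3f}, observed settling time ~ {t:.3f}")
```

For these constants the bound is tight: with alpha = 0.5 the decay dV/dt = -2*sqrt(V) gives sqrt(V(t)) = sqrt(V0) - t, so the state reaches zero exactly at T = 2, matching the formula.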
