Paper Title
On Solution Functions of Optimization: Universal Approximation and Covering Number Bounds
Paper Authors
Paper Abstract
We study the expressibility and learnability of convex optimization solution functions and their multi-layer architectural extension. The main results are: \emph{(1)} the class of solution functions of linear programming (LP) and quadratic programming (QP) is a universal approximant for the $C^k$ smooth model class or some restricted Sobolev space, and we characterize the rate-distortion, \emph{(2)} the approximation power is investigated through a viewpoint of regression error, where information about the target function is provided in terms of data observations, \emph{(3)} compositionality in the form of a deep architecture with optimization as a layer is shown to reconstruct some basic functions used in numerical analysis without error, which implies that \emph{(4)} a substantial reduction in rate-distortion can be achieved with a universal network architecture, and \emph{(5)} we discuss the statistical bounds of empirical covering numbers for LP/QP, as well as a generic optimization problem (possibly nonconvex) by exploiting tame geometry. Our results provide the \emph{first rigorous analysis of the approximation and learning-theoretic properties of solution functions} with implications for algorithmic design and performance guarantees.
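As a minimal illustrative sketch (not taken from the paper; the function names and the particular QP are assumptions chosen for illustration), the snippet below shows the kind of "optimization as a layer" building block the abstract refers to: the solution function of the scalar QP $S(t) = \arg\min_{x \ge 0} (x - t)^2$ equals $\max(t, 0)$, so a single QP layer reproduces the ReLU function without error.

```python
# Illustrative sketch only: a 1-D parametric QP whose solution function
#   S(t) = argmin_{x >= 0} (x - t)^2
# has the closed form S(t) = max(t, 0), i.e. ReLU(t).
import numpy as np
from scipy.optimize import minimize


def qp_layer(t: float) -> float:
    """Numerically solve min_{x >= 0} (x - t)^2 and return the minimizer."""
    res = minimize(lambda x: (x[0] - t) ** 2, x0=[0.0], bounds=[(0.0, None)])
    return float(res.x[0])


if __name__ == "__main__":
    for t in [-2.0, -0.5, 0.0, 1.5]:
        # The QP solution matches ReLU(t) = max(t, 0) up to solver tolerance.
        print(t, qp_layer(t), max(t, 0.0))
```

This is only meant to make the compositionality claim concrete: stacking such solution functions yields a deep architecture in which each layer is itself the output of an LP/QP, rather than an explicit formula.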