Paper Title

Statistical Inference of Constrained Stochastic Optimization via Sketched Sequential Quadratic Programming

Paper Authors

Sen Na, Michael W. Mahoney

Paper Abstract

We consider online statistical inference of constrained stochastic nonlinear optimization problems. We apply the Stochastic Sequential Quadratic Programming (StoSQP) method to solve these problems, which can be regarded as applying Newton's method to the Karush-Kuhn-Tucker (KKT) conditions. In each iteration, the StoSQP method computes the Newton direction by solving a quadratic program, and then selects a proper adaptive stepsize $\bar{\alpha}_t$ to update the primal-dual iterate. To reduce the dominant computational cost of the method, we inexactly solve the quadratic program in each iteration by employing an iterative sketching solver. Notably, the approximation error of the sketching solver need not vanish as iterations proceed, meaning that the per-iteration computational cost does not blow up. For the above StoSQP method, we show that under mild assumptions, the rescaled primal-dual sequence $1/\sqrt{\bar{\alpha}_t}\cdot (x_t - x^\star, \lambda_t - \lambda^\star)$ converges to a mean-zero Gaussian distribution with a nontrivial covariance matrix depending on the underlying sketching distribution. To perform inference in practice, we also analyze a plug-in covariance matrix estimator. We illustrate the asymptotic normality result of the method both on benchmark nonlinear problems in the CUTEst test set and on linearly/nonlinearly constrained regression problems.
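To make the abstract's pipeline concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of a StoSQP-style loop on a toy equality-constrained problem: minimize $0.5\,\|x - x_0\|^2$ subject to $Ax = b$, whose KKT conditions are linear. Each iteration builds the KKT system from a noisy gradient, solves it *inexactly* with a randomized block sketch-and-project (Kaczmarz-type) inner solver, one simple instance of an iterative sketching solver, and takes a decaying stepsize $\bar{\alpha}_t$. The problem data (`x0`, `A`, `b`), the $t^{-0.6}$ stepsize schedule, and the inner-iteration budget are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: minimize 0.5*||x - x0||^2  s.t.  A x = b.
# The KKT conditions are linear, so each SQP step solves
#   [ I   A^T ] [dx  ]     [ grad f(x) + A^T lam ]
#   [ A   0   ] [dlam] = - [ A x - b             ]
d, m = 5, 2
x0 = rng.standard_normal(d)
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)

# Closed-form reference solution: projection of x0 onto {x : A x = b}.
nu = np.linalg.solve(A @ A.T, A @ x0 - b)
x_star = x0 - A.T @ nu

def sketched_solve(K, rhs, n_inner, block, rng):
    """Randomized block sketch-and-project (Kaczmarz-type) for K z = rhs.
    A *fixed* number of inner iterations is used, so the approximation
    error need not vanish across outer iterations -- mirroring the
    non-vanishing sketching error described in the abstract."""
    z = np.zeros_like(rhs)
    n = K.shape[0]
    for _ in range(n_inner):
        idx = rng.choice(n, size=block, replace=False)
        Ks, rs = K[idx], rhs[idx]
        # Project z onto the solution set of the sketched (sampled) rows.
        z += Ks.T @ np.linalg.lstsq(Ks @ Ks.T, rs - Ks @ z, rcond=None)[0]
    return z

# Outer StoSQP-style loop: noisy gradient, inexact Newton direction,
# decaying stepsize bar_alpha_t = (t+1)^{-0.6} (illustrative schedule).
x, lam = np.zeros(d), np.zeros(m)
K = np.block([[np.eye(d), A.T], [A, np.zeros((m, m))]])  # Hessian is I here
for t in range(300):
    noisy_grad = (x - x0) + 0.01 * rng.standard_normal(d)  # stochastic gradient
    rhs = -np.concatenate([noisy_grad + A.T @ lam, A @ x - b])
    step = sketched_solve(K, rhs, n_inner=20, block=4, rng=rng)
    bar_alpha = (t + 1) ** -0.6
    x += bar_alpha * step[:d]
    lam += bar_alpha * step[d:]

print("primal error:", np.linalg.norm(x - x_star))
print("constraint violation:", np.linalg.norm(A @ x - b))
```

Because the inner solver only projects onto a few sampled rows per pass, its per-iteration cost stays low while the outer stochastic-approximation loop still drives the iterate toward the KKT point; the paper's asymptotic normality result concerns exactly this rescaled primal-dual error under such non-vanishing sketching error.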
