Paper Title


Adaptive Primal-Dual Stochastic Gradient Method for Expectation-constrained Convex Stochastic Programs

Paper Authors

Yonggui Yan, Yangyang Xu

Paper Abstract


Stochastic gradient methods (SGMs) have been widely used for solving stochastic optimization problems. A majority of existing works assume no constraints or easy-to-project constraints. In this paper, we consider convex stochastic optimization problems with expectation constraints. For these problems, it is often extremely expensive to perform projection onto the feasible set. Several SGMs in the literature can be applied to solve such expectation-constrained stochastic problems. We propose a novel primal-dual type SGM based on the Lagrangian function. Different from existing methods, our method incorporates an adaptiveness technique to speed up convergence. At each iteration, our method queries an unbiased stochastic subgradient of the Lagrangian function, and then it updates the primal variables by an adaptive-SGM step and the dual variables by a vanilla-SGM step. We show that the proposed method has a convergence rate of $O(1/\sqrt{k})$ in terms of the objective error and the constraint violation. Although this convergence rate matches that of existing SGMs, we observe significantly faster convergence than an existing non-adaptive primal-dual SGM and a primal SGM when solving Neyman-Pearson classification problems and quadratically constrained quadratic programs. Furthermore, we modify the proposed method to solve convex-concave stochastic minimax problems, for which we perform adaptive-SGM updates to both the primal and the dual variables. A convergence rate of $O(1/\sqrt{k})$, in terms of the primal-dual gap, is also established for the modified method on minimax problems.
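The per-iteration scheme described in the abstract (a stochastic subgradient of the Lagrangian, an adaptive update for the primal variables, a vanilla projected-SGM update for the dual variable) can be sketched on a toy expectation-constrained problem. The problem instance, the step sizes, and the AdaGrad-style choice of adaptive primal step below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
a_bar = np.ones(n)   # mean of the random constraint vector a ~ N(a_bar, I)
b = 1.0

# Toy problem: minimize (1/2)||x||^2  s.t.  b - E[a^T x] <= 0.
# Closed-form solution: x* = b * a_bar / ||a_bar||^2.
x = np.zeros(n)
lam = 0.0                      # dual variable, kept nonnegative
G = np.zeros(n)                # accumulated squared primal gradients (AdaGrad-style)
alpha, beta = 0.5, 0.5         # primal / dual base step sizes (assumed)
K = 20000
x_avg = np.zeros(n)            # ergodic (running) average of primal iterates

for k in range(1, K + 1):
    a = a_bar + rng.standard_normal(n)   # unbiased sample of a
    # Stochastic subgradient of L(x, lam) = (1/2)||x||^2 + lam * (b - a^T x)
    gx = x - lam * a                     # w.r.t. x
    glam = b - a @ x                     # w.r.t. lam
    # Adaptive (AdaGrad-style) primal update
    G += gx * gx
    x = x - alpha * gx / (np.sqrt(G) + 1e-8)
    # Vanilla projected-SGM dual update with an O(1/sqrt(k)) step
    lam = max(0.0, lam + beta / np.sqrt(k) * glam)
    x_avg += (x - x_avg) / k

x_star = b * a_bar / (a_bar @ a_bar)
print("error:", np.linalg.norm(x_avg - x_star))
print("constraint value b - a_bar^T x_avg:", b - a_bar @ x_avg)
```

The per-coordinate step `alpha / sqrt(G)` shrinks faster along directions with large accumulated gradient magnitude, which is the kind of adaptiveness the abstract credits for the observed speed-up over non-adaptive primal-dual SGMs; the `max(0, ...)` projection keeps the dual multiplier feasible.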
