Paper Title
An Inexact Accelerated Stochastic ADMM for Separable Convex Optimization
Paper Authors
Paper Abstract
An inexact accelerated stochastic Alternating Direction Method of Multipliers (AS-ADMM) scheme is developed for solving structured separable convex optimization problems with linear constraints. The objective function is the sum of a possibly nonsmooth convex function and a smooth function that is an average of many component convex functions. Problems with this structure often arise in machine learning and data mining applications. AS-ADMM combines ideas from ADMM and from stochastic gradient methods with variance reduction. One of the ADMM subproblems employs a linearization technique, and a similar linearization can be introduced for the other subproblem. For a specified choice of the algorithm parameters, it is shown that the objective error and the constraint violation are $\mathcal{O}(1/k)$ relative to the number of outer iterations $k$. Under a strong convexity assumption, the expected iterate error converges to zero linearly. A linearized variant of AS-ADMM and incremental sampling strategies are also discussed. Numerical experiments with both stochastic and deterministic ADMM algorithms show that AS-ADMM can be particularly effective for structured optimization arising in big data applications.
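To make the scheme described above concrete, the following is a minimal numpy sketch of one variant of the idea: a linearized stochastic ADMM step with an SVRG-style variance-reduced gradient, applied to an illustrative lasso-type splitting $\min_{x,y}\ \lambda\|x\|_1 + \frac{1}{2n}\sum_i (a_i^\top y - c_i)^2$ subject to $x - y = 0$. The problem instance, the parameter names (`lam`, `rho`, `eta`), and the specific update formulas are assumptions chosen for illustration; this is not the paper's exact AS-ADMM algorithm.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): linearized stochastic
# ADMM with an SVRG-style variance-reduced gradient for the splitting
#   min_{x,y}  lam*||x||_1 + (1/(2n)) * sum_i (a_i^T y - c_i)^2
#   s.t.       x - y = 0.
# The smooth term is an average of n component functions, matching the
# problem structure in the abstract. All data and parameters are synthetic.

rng = np.random.default_rng(0)
n, d = 200, 20
A = rng.standard_normal((n, d))
c = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
lam, rho, eta = 0.1, 1.0, 0.1   # regularization, penalty, step size (assumed)

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1; solves the x-subproblem exactly."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def grad_i(y, i):
    """Gradient of the i-th smooth component f_i(y) = 0.5*(a_i^T y - c_i)^2."""
    return A[i] * (A[i] @ y - c[i])

x, y, mu = np.zeros(d), np.zeros(d), np.zeros(d)
for k in range(50):                        # outer iterations
    y_snap = y.copy()                      # SVRG snapshot point
    full_grad = A.T @ (A @ y_snap - c) / n # full gradient at the snapshot
    for _ in range(n):                     # inner stochastic steps
        # x-subproblem: exact prox of the nonsmooth term.
        x = soft_threshold(y - mu / rho, lam / rho)
        # Variance-reduced gradient estimate for the smooth term.
        i = rng.integers(n)
        v = grad_i(y, i) - grad_i(y_snap, i) + full_grad
        # Linearized y-subproblem: closed-form minimizer of
        #   v^T y - mu^T y + (rho/2)*||x - y||^2 + (1/(2*eta))*||y - y_old||^2.
        y = (y / eta + mu + rho * x - v) / (1.0 / eta + rho)
        # Multiplier update for the constraint x - y = 0.
        mu = mu + rho * (x - y)
```

The inner loop linearizes the smooth term around the current iterate (so the y-update is a cheap closed-form step) and replaces the full gradient with a variance-reduced stochastic estimate, which is the combination of ADMM and variance-reduced stochastic gradients that the abstract describes.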