Title

Distributed stochastic projection-free solver for constrained optimization

Authors

Xia Jiang, Xianlin Zeng, Lihua Xie, Jian Sun, Jie Chen

Abstract


This paper proposes a distributed stochastic projection-free algorithm for large-scale constrained finite-sum optimization whose constraint set is complex, so that projection onto it can be expensive. The global cost function is allocated to multiple agents, each of which computes its local stochastic gradients and communicates with its neighbors to solve the global problem. Stochastic gradient methods offer low per-iteration computational cost, but the variance introduced by random sampling makes them slow to converge. To construct a convergent distributed stochastic projection-free algorithm, this paper incorporates a variance-reduction technique and a gradient-tracking technique into the Frank-Wolfe update. We develop a sampling rule for the variance-reduction technique to reduce the variance introduced by stochastic gradients. Complete and rigorous proofs show that the proposed distributed projection-free algorithm converges at a sublinear rate and enjoys superior complexity guarantees for both convex and non-convex objective functions. Comparative simulations demonstrate the convergence and computational efficiency of the proposed algorithm.
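The abstract describes a decentralized Frank-Wolfe update that combines a variance-reduced stochastic gradient estimator with gradient tracking across agents. The following minimal Python sketch illustrates that combination on a toy problem. Everything specific here is an illustrative assumption, not the paper's algorithm: the quadratic losses, the simplex constraint (whose linear minimization oracle is trivial), the ring mixing matrix, the step-size schedule, and the SPIDER-style recursive estimator with periodic full-batch refresh.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: each of m agents holds n local quadratic
# losses f_ij(x) = 0.5 * ||A_ij x - b_ij||^2, and the constraint set is
# the probability simplex, so the linear minimization oracle (LMO) just
# picks the vertex with the smallest gradient entry (projection-free).
m, n, d = 4, 20, 5
A = rng.standard_normal((m, n, d, d))
b = rng.standard_normal((m, n, d))

def local_grad(i, x, idx):
    """Stochastic gradient of agent i averaged over sampled indices idx."""
    g = np.zeros(d)
    for j in idx:
        g += A[i, j].T @ (A[i, j] @ x - b[i, j])
    return g / len(idx)

def lmo(g):
    """argmin_{s in simplex} <g, s>: the vertex at the smallest coordinate of g."""
    s = np.zeros(d)
    s[np.argmin(g)] = 1.0
    return s

def global_obj(x):
    """Global finite-sum objective evaluated at a single point x."""
    return np.mean([0.5 * np.sum((A[i, j] @ x - b[i, j]) ** 2)
                    for i in range(m) for j in range(n)])

# Doubly stochastic mixing matrix for a ring of m agents (gossip averaging).
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i + 1) % m] = 0.25
    W[i, (i - 1) % m] = 0.25

T, batch, period = 200, 4, 20
x = np.tile(np.ones(d) / d, (m, 1))      # all agents start at the simplex center
v = np.array([local_grad(i, x[i], range(n)) for i in range(m)])  # VR estimator
y = v.copy()                             # gradient-tracking variable

for t in range(1, T + 1):
    x_old = x.copy()
    # Frank-Wolfe step: mix neighbors' iterates, then move toward the LMO
    # vertex chosen with the *tracked* gradient y (stays in the simplex).
    step = 2.0 / (t + 2)
    S = np.array([lmo(y[i]) for i in range(m)])
    x = (1.0 - step) * (W @ x) + step * S
    # SPIDER-style variance-reduced update of each local gradient estimator.
    v_new = np.empty_like(v)
    for i in range(m):
        if t % period == 0:
            v_new[i] = local_grad(i, x[i], range(n))  # periodic full batch
        else:
            idx = rng.choice(n, size=batch, replace=False)
            v_new[i] = v[i] + local_grad(i, x[i], idx) - local_grad(i, x_old[i], idx)
    # Gradient tracking: mix neighbors' trackers, add the estimator increment.
    y = W @ y + v_new - v
    v = v_new
```

Because the mixing matrix is doubly stochastic and each step is a convex combination with a simplex vertex, every iterate remains feasible without any projection, which is the point of a projection-free (Frank-Wolfe) scheme.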
