Paper Title

Linear Convergence of Distributed Mirror Descent with Integral Feedback for Strongly Convex Problems

Paper Authors

Youbang Sun, Shahin Shahrampour

Paper Abstract

Distributed optimization often requires finding the minimum of a global objective function written as a sum of local functions. A group of agents work collectively to minimize the global function. We study a continuous-time decentralized mirror descent algorithm that uses purely local gradient information to converge to the global optimal solution. The algorithm enforces consensus among agents using the idea of integral feedback. Recently, Sun and Shahrampour (2020) studied the asymptotic convergence of this algorithm for the case when the global function is strongly convex but the local functions are only convex. Using control theory tools, in this work, we prove that the algorithm indeed achieves (local) exponential convergence. We also provide a numerical experiment on a real dataset to validate the convergence speed of the algorithm.
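
The abstract only sketches the algorithm at a high level. Below is a minimal numerical sketch of the integral-feedback idea, assuming an Euler discretization, an identity mirror map (so the mirror step reduces to a plain gradient step), quadratic local functions, a ring communication graph, and a generic proportional-integral consensus rule. The variable names, gains, and update equations are illustrative assumptions, not the paper's exact continuous-time dynamics.

# Illustrative sketch only (not the paper's exact dynamics): Euler-discretized
# distributed mirror descent with an integral (PI-type) consensus feedback term.
# Mirror map, local functions, graph, gains, and step size are all assumptions.
import numpy as np

n, d = 4, 3                        # number of agents, problem dimension
rng = np.random.default_rng(0)
b = rng.normal(size=(n, d))        # local targets: f_i(x) = 0.5 * ||x - b_i||^2
x_star = b.mean(axis=0)            # global minimizer of sum_i f_i

# Laplacian of an assumed ring communication graph
L = 2.0 * np.eye(n)
for i in range(n):
    L[i, (i + 1) % n] -= 1.0
    L[i, (i - 1) % n] -= 1.0

alpha, beta, dt = 1.0, 1.0, 0.01   # consensus gain, integral gain, Euler step
z = np.zeros((n, d))               # dual (mirror) variables, one row per agent
v = np.zeros((n, d))               # integral feedback states

for _ in range(20000):
    x = z                          # x_i = grad psi*(z_i); identity mirror map here
    grad = x - b                   # local gradients of the quadratic f_i
    disagreement = L @ z           # proportional consensus term
    z = z + dt * (-grad - alpha * disagreement - L @ v)
    v = v + dt * beta * disagreement   # integral feedback accumulates disagreement

print("max distance to global optimum:", np.abs(z - x_star).max())

At a fixed point of these updates the integral state forces the disagreement term L @ z to vanish, which is the role integral feedback plays in enforcing consensus while each agent uses only its own gradient.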
