Title
A Universal Decomposition for Distributed Optimization Algorithms
Authors
Abstract
In the distributed optimization problem for a multi-agent system, each agent knows a local function and must find a minimizer of the sum of all agents' local functions by performing a combination of local gradient evaluations and information exchanges with neighboring agents. We prove that every distributed optimization algorithm can be factored into a centralized optimization method and a second-order consensus estimator, effectively separating the "optimization" and "consensus" tasks. We illustrate this fact by providing the decomposition for many recently proposed distributed optimization algorithms. Conversely, we prove that any optimization method that converges in the centralized setting can be combined with any second-order consensus estimator to form a distributed optimization algorithm that converges in the multi-agent setting. Finally, we describe how our decomposition may lead to a more systematic algorithm design methodology.
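A minimal sketch of the kind of pairing the abstract describes: a centralized method (plain gradient descent) combined with a consensus mechanism (here, gradient tracking over a ring of four agents). All problem data below (the quadratic local functions, mixing weights, step size, and iteration count) are illustrative assumptions, not details taken from the paper.

```python
# Each agent i holds a local quadratic f_i(x) = 0.5 * (x - a_i)^2,
# so the minimizer of the sum is the mean of the targets a_i.
n = 4
targets = [1.0, 2.0, 3.0, 4.0]

# Doubly stochastic mixing matrix for a ring graph (assumed topology).
W = [[0.5 if i == j else 0.25 if abs(i - j) in (1, n - 1) else 0.0
      for j in range(n)] for i in range(n)]

def grad(i, x):
    return x - targets[i]  # gradient of agent i's local function

def mix(v):
    # One round of neighbor communication: v -> W v
    return [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]

alpha = 0.05                              # step size of the "optimization" part
x = [0.0] * n                             # local iterates
y = [grad(i, x[i]) for i in range(n)]     # gradient-tracking ("consensus") variables

for _ in range(2000):
    # Optimization step on mixed iterates: x_{k+1} = W x_k - alpha * y_k
    x_new = [xi - alpha * yi for xi, yi in zip(mix(x), y)]
    # Tracking step: y_{k+1} = W y_k + grad(x_{k+1}) - grad(x_k)
    y = [yi + grad(i, x_new[i]) - grad(i, x[i])
         for i, yi in enumerate(mix(y))]
    x = x_new

# Every agent's iterate approaches the global minimizer mean(targets) = 2.5
```

This is not the paper's construction itself, only a familiar instance (gradient tracking) of the optimization/consensus separation the abstract asserts holds for every distributed optimization algorithm.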