Title
Gradient-Free Distributed Optimization with Exact Convergence
Authors
Abstract
In this paper, a gradient-free distributed algorithm is introduced to solve a set-constrained optimization problem over a directed communication network. Specifically, at each time-step the agents locally compute a so-called pseudo-gradient to guide the updates of the decision variables, which makes the algorithm applicable in settings where gradient information is unknown, unavailable, or non-existent. A surplus-based method is adopted to remove the doubly stochastic requirement on the weighting matrix, enabling the algorithm to run on graphs that admit no doubly stochastic weighting matrix. Regarding convergence, the proposed algorithm attains exact convergence to the optimal value for any positive, non-increasing, and non-summable step-sizes. Furthermore, when the step-size is also square-summable, the algorithm is guaranteed to converge exactly to an optimal solution. In addition to the standard convergence analysis, the convergence rate of the proposed algorithm is also investigated. Finally, the effectiveness of the proposed algorithm is verified through numerical simulations.
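The abstract does not spell out how the pseudo-gradient is built, so the following is only a minimal single-agent sketch of one common gradient-free construction: a two-point randomized (Gaussian-smoothing) estimator queried with function values only, combined with projection onto the constraint set and step-sizes alpha_k = 1/(k+1), which are positive, non-increasing, non-summable, and square-summable. The function `pseudo_gradient`, the smoothing radius `mu`, and the box constraint are illustrative assumptions, not the paper's exact scheme (which is distributed and surplus-based).

```python
import numpy as np

def pseudo_gradient(f, x, mu=1e-4, rng=None):
    """Two-point randomized pseudo-gradient estimate of f at x.

    Only function values of f are queried, so no analytic gradient
    is needed. (Illustrative stand-in for the paper's pseudo-gradient.)
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)            # random search direction
    return (f(x + mu * u) - f(x)) / mu * u      # directional estimate

# Example: minimize f(x) = ||x - 1||^2 over the box [-2, 2]^2 with
# projected pseudo-gradient descent and step-sizes alpha_k = 1/(k+1).
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(2)
rng = np.random.default_rng(0)
for k in range(2000):
    alpha = 1.0 / (k + 1)                       # diminishing step-size
    g = pseudo_gradient(f, x, rng=rng)
    x = np.clip(x - alpha * g, -2.0, 2.0)       # projection onto the box
print(x)  # x ends up close to the optimizer [1, 1]
```

In the paper's distributed setting, each agent would run such an update on its local objective while exchanging iterates with neighbors over the directed graph; the surplus-based correction replaces the doubly stochastic mixing assumed by simpler consensus schemes.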