Paper Title
Numerical Methods for Distributed Stochastic Compositional Optimization Problems with Aggregative Structure
Paper Authors
Paper Abstract
This paper studies distributed stochastic compositional optimization problems over networks, where the inner-level function shared by all agents is the sum of the agents' private expectation functions. Exploiting this aggregative structure of the inner-level function, we employ a hybrid variance reduction method to estimate each agent's private expectation function, and apply a dynamic consensus mechanism to track the network-wide inner-level function. By combining these components with the standard distributed stochastic gradient descent method, we propose a distributed aggregative stochastic compositional gradient descent method. When the objective function is smooth, the proposed method achieves the optimal convergence rate $\mathcal{O}\left(K^{-1/2}\right)$. We further combine the proposed method with communication compression and propose a communication-compressed variant of the distributed aggregative stochastic compositional gradient descent method; the compressed variant maintains the optimal convergence rate $\mathcal{O}\left(K^{-1/2}\right)$. Simulation experiments on decentralized reinforcement learning verify the effectiveness of the proposed methods.
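To make the abstract's three building blocks concrete, below is a minimal single-process sketch of one plausible realization, assuming the problem takes the common aggregative compositional form $\min_x \frac{1}{n}\sum_i f_i\big(\frac{1}{n}\sum_j g_j(x)\big)$: a STORM-style hybrid variance-reduction estimator u_i for each private inner function g_i, a dynamic average-consensus tracker y_i for the aggregate inner-level function, and a consensus-plus-descent update of each local decision x_i. The toy linear g_i and quadratic f_i, the mixing matrix, the step sizes, and the local gradient surrogate are all illustrative assumptions, not the paper's algorithm.

    # A toy simulation of the three ingredients; every name, update rule, and
    # constant below is an illustrative assumption, not the authors' pseudocode.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, p = 8, 5, 3                    # agents, decision dim, inner-output dim
    K, alpha, beta = 2000, 0.02, 0.5     # iterations, step size, hybrid-VR weight

    # Doubly stochastic mixing matrix for a ring graph.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = W[i, i] = 1 / 3

    # Hypothetical private functions: g_i(x) = A_i x + b_i observed with noise,
    # f_i(y) = 0.5 * ||y - c_i||^2, so that grad f_i(y) = y - c_i.
    A = rng.standard_normal((n, p, d))
    b = rng.standard_normal((n, p))
    c = rng.standard_normal((n, p))

    def sample_g(i, x):
        # Noisy sample G_i(x; xi) of agent i's private expectation g_i(x).
        return A[i] @ x + b[i] + 0.01 * rng.standard_normal(p)

    x = np.zeros((n, d))                                   # local decisions
    u = np.array([sample_g(i, x[i]) for i in range(n)])    # hybrid-VR estimators
    y = u.copy()                                           # consensus trackers

    for k in range(K):
        # Local compositional surrogate grad g_i(x_i)^T grad f_i(y_i); the
        # paper's exact descent direction may differ (e.g., it may also track
        # the network average of the outer gradients).
        grad = np.array([A[i].T @ (y[i] - c[i]) for i in range(n)])
        x_new = W @ x - alpha * grad       # consensus step plus descent step

        # Hybrid variance reduction (STORM-style) on each private g_i:
        # u_i <- G_i(x_new) + (1 - beta) * (u_i - G_i(x_old)).
        u_new = np.array([sample_g(i, x_new[i])
                          + (1 - beta) * (u[i] - sample_g(i, x[i]))
                          for i in range(n)])

        # Dynamic average consensus: y_i tracks (1/n) * sum_j g_j(x_j).
        y = W @ y + u_new - u
        x, u = x_new, u_new

    print("consensus error:", np.linalg.norm(x - x.mean(0)))
    print("tracking error :", np.linalg.norm(y - (A @ x.mean(0) + b).mean(0)))

Note that the trackers y_i exchange only the mixing step plus the increments u_i - u_i_old, which is what would make them a natural target for the communication compression (e.g., quantization or sparsification of the transmitted vectors) described in the compressed variant; the compression operator itself is omitted from this sketch.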