Title
Finite-Time Convergence Rates of Decentralized Stochastic Approximation with Applications in Multi-Agent and Multi-Task Learning
Authors
Abstract
We study a decentralized variant of stochastic approximation, a data-driven approach for finding the root of an operator under noisy measurements. A network of agents, each with its own operator and data observations, cooperatively finds the fixed point of the aggregate operator over a decentralized communication graph. Our main contribution is to provide a finite-time analysis of this decentralized stochastic approximation method when the data observed at each agent are sampled from a Markov process; this lack of independence makes the iterates biased and (potentially) unbounded. Under fairly standard assumptions, we show that the convergence rate of the proposed method is essentially the same as if the samples were independent, differing only by a log factor that accounts for the mixing time of the Markov processes. The key idea in our analysis is to introduce a novel Razumikhin-Lyapunov function, motivated by the one used in analyzing the stability of delayed ordinary differential equations. We also discuss applications of the proposed method to a number of interesting learning problems in multi-agent systems.
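To make the setting concrete, the following is a minimal sketch of the kind of update the abstract describes: each agent mixes its iterate with its neighbors' over a communication graph, then takes a local stochastic-approximation step driven by Markovian (hence correlated, per-sample biased) noise. The toy operators, the ring topology, the two-state noise chain, and all parameter values are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                        # number of agents on a ring graph
b = rng.normal(size=N)       # agent i's local target; the root of the
                             # aggregate operator F(x) = mean(b) - x
                             # is x* = mean(b)

# Doubly stochastic mixing matrix W for the ring (lazy nearest-neighbor weights)
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

# Markovian noise: each agent runs a two-state chain on {+1, -1}.
# The stationary distribution is uniform, so the noise is zero-mean only
# on average, not per sample -- the sample correlation/bias the paper handles.
state = rng.choice([1.0, -1.0], size=N)

x = np.zeros(N)              # agents' iterates
for k in range(20000):
    alpha = 1.0 / (k + 100)              # diminishing step size
    flip = rng.random(N) < 0.3           # one step of each noise chain
    state = np.where(flip, -state, state)
    noisy_op = (b + 0.5 * state) - x     # local operator under Markov noise
    x = W @ x + alpha * noisy_op         # consensus step + local SA step

print(abs(x - b.mean()).max())           # all agents end up near x* = mean(b)
```

Despite each agent seeing only its own biased, correlated samples and its neighbors' iterates, the network reaches (approximate) consensus at the root of the aggregate operator, which is the behavior the finite-time analysis quantifies.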