Paper Title
3DPG: Distributed Deep Deterministic Policy Gradient Algorithms for Networked Multi-Agent Systems
Paper Authors
Paper Abstract
We present Distributed Deep Deterministic Policy Gradient (3DPG), a multi-agent actor-critic (MAAC) algorithm for Markov games. Unlike previous MAAC algorithms, 3DPG is fully distributed during both training and deployment. 3DPG agents calculate local policy gradients based on the most recently available local data (states, actions) and local policies of other agents. During training, this information is exchanged over a potentially lossy and delaying communication network. The network therefore induces Age of Information (AoI) for data and policies. We prove the asymptotic convergence of 3DPG even in the presence of potentially unbounded AoI. This provides an important step towards practical online and distributed multi-agent learning, since 3DPG does not assume that information is available deterministically. We analyze 3DPG in the presence of policy and data transfer under mild practical assumptions. Our analysis shows that 3DPG agents converge to a local Nash equilibrium of Markov games in terms of utility functions expressed as the expected value of the agents' local approximate action-value functions (Q-functions). The expectations of the local Q-functions are taken with respect to limiting distributions over the global state-action space shaped by the agents' accumulated local experiences. Our results also shed light on the policies obtained by general MAAC algorithms. We show through a heuristic argument and numerical experiments that 3DPG improves convergence over previous MAAC algorithms that use old actions instead of old policies during training. Further, we show that 3DPG is robust to AoI; it learns competitive policies even with large AoI and low data availability.
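To make the described mechanism concrete, the following is a minimal sketch of the kind of local actor update the abstract outlines, assuming DDPG-style deterministic policies \mu_i with parameters \theta_i, a local critic Q_i, a distribution \mathcal{D}_i over agent i's accumulated local experience, and \tau_{ij} denoting the AoI of the most recently received copy of agent j's policy; the notation is chosen here for illustration and need not match the paper:

\nabla_{\theta_i} J_i(\theta_i) \;\approx\; \mathbb{E}_{s \sim \mathcal{D}_i}\!\left[ \nabla_{\theta_i}\mu_i(s_i;\theta_i)\, \nabla_{a_i} Q_i\big(s, a_1,\dots,a_N\big)\,\Big|_{\,a_i=\mu_i(s_i;\theta_i),\;\; a_j=\mu_j^{(t-\tau_{ij})}(s_j)\ \text{for } j\neq i} \right]

Each agent differentiates only its own policy parameters, while the other agents' actions are generated by the aged policy copies received over the communication network, rather than by stored old actions.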