Paper Title

Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

Paper Authors

Tabish Rashid, Mikayel Samvelyan, Christian Schroeder de Witt, Gregory Farquhar, Jakob Foerster, Shimon Whiteson

Paper Abstract

In many real-world settings, a team of agents must coordinate its behaviour while acting in a decentralised fashion. At the same time, it is often possible to train the agents in a centralised fashion where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a mixing network that estimates joint action-values as a monotonic combination of per-agent values. We structurally enforce that the joint-action value is monotonic in the per-agent values, through the use of non-negative weights in the mixing network, which guarantees consistency between the centralised and decentralised policies. To evaluate the performance of QMIX, we propose the StarCraft Multi-Agent Challenge (SMAC) as a new benchmark for deep multi-agent reinforcement learning. We evaluate QMIX on a challenging set of SMAC scenarios and show that it significantly outperforms existing multi-agent reinforcement learning methods.
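To make the mixing architecture described in the abstract concrete, below is a minimal sketch of the idea in PyTorch; it is not the authors' reference implementation, and the layer sizes (e.g. embed_dim = 32) and module names are illustrative assumptions. Hypernetworks conditioned on the global state produce the mixing weights, and taking their absolute value keeps those weights non-negative, so the estimated joint action-value Q_tot is monotonic in every per-agent value.

```python
# Minimal QMIX-style mixing network sketch (illustrative, not the reference code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class QMixer(nn.Module):
    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks: map the global state to the mixing network's weights and biases.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1)
        )

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents) chosen per-agent action-values
        # state:    (batch, state_dim) global state, available during centralised training only
        bs = agent_qs.size(0)
        agent_qs = agent_qs.view(bs, 1, self.n_agents)

        # Absolute value enforces non-negative mixing weights, hence monotonicity of Q_tot.
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = F.elu(torch.bmm(agent_qs, w1) + b1)   # (batch, 1, embed_dim)

        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2             # (batch, 1, 1)
        return q_tot.view(bs, 1)
```

Because the mixing weights are non-negative, Q_tot is non-decreasing in each agent's value, so a greedy argmax over the joint action decomposes into independent per-agent argmaxes; this is the consistency between the centralised and decentralised policies referred to in the abstract.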
