Paper Title
Policy Gradient With Serial Markov Chain Reasoning
Paper Authors
Paper Abstract
We introduce a new framework that performs decision-making in reinforcement learning (RL) as an iterative reasoning process. We model agent behavior as the steady-state distribution of a parameterized reasoning Markov chain (RMC), optimized with a new tractable estimate of the policy gradient. We perform action selection by simulating the RMC for enough reasoning steps to approach its steady-state distribution. We show our framework has several useful properties that are inherently missing from traditional RL. For instance, it allows agent behavior to approximate any continuous distribution over actions by parameterizing the RMC with a simple Gaussian transition function. Moreover, the number of reasoning steps to reach convergence can scale adaptively with the difficulty of each action selection decision and can be accelerated by re-using past solutions. Our resulting algorithm achieves state-of-the-art performance in popular Mujoco and DeepMind Control benchmarks, both for proprioceptive and pixel-based tasks.
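Below is a minimal sketch of how action selection by simulating the RMC could look, assuming a Gaussian transition network that maps the current state and the previous action to the parameters of the next action distribution. The names `transition_net`, `rmc_step`, and `select_action`, the norm-based stopping tolerance, and the warm-start argument are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rmc_step(state, action, transition_net, rng):
    """One reasoning step: sample the next action from a Gaussian whose mean
    and log-std are predicted from the current state and previous action."""
    mean, log_std = transition_net(state, action)
    return mean + np.exp(log_std) * rng.standard_normal(mean.shape)

def select_action(state, transition_net, init_action, max_steps=32, tol=1e-2,
                  rng=None):
    """Simulate the RMC until consecutive actions barely change, a crude
    proxy for approaching its steady-state distribution. Passing a past
    solution as `init_action` (warm-starting) can reduce the steps needed."""
    if rng is None:
        rng = np.random.default_rng()
    action = init_action
    for _ in range(max_steps):
        new_action = rmc_step(state, action, transition_net, rng)
        # Adaptive stopping: halt once further reasoning steps stop
        # noticeably changing the action.
        if np.linalg.norm(new_action - action) < tol:
            return new_action
        action = new_action
    return action
```

The norm-based stopping rule here is only a stand-in for a convergence check on the chain; it serves to illustrate the two properties highlighted in the abstract, namely that the number of reasoning steps can adapt to the difficulty of each decision and that re-using past solutions can accelerate convergence.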