Paper Title

Provably Efficient Model-based Policy Adaptation

Paper Authors

Yuda Song, Aditi Mavalankar, Wen Sun, Sicun Gao

Paper Abstract

The high sample complexity of reinforcement learning challenges its use in practice. A promising approach is to quickly adapt pre-trained policies to new environments. Existing methods for this policy adaptation problem typically rely on domain randomization and meta-learning, by sampling from some distribution of target environments during pre-training, and thus face difficulty on out-of-distribution target environments. We propose new model-based mechanisms that are able to make online adaptation in unseen target environments, by combining ideas from no-regret online learning and adaptive control. We prove that the approach learns policies in the target environment that can quickly recover trajectories from the source environment, and establish the rate of convergence in general settings. We demonstrate the benefits of our approach for policy adaptation in a diverse set of continuous control tasks, achieving the performance of state-of-the-art methods with much lower sample complexity.
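
The sketch below illustrates the adaptation loop the abstract describes: fit an online dynamics model of the unknown target environment from all transitions observed so far, and at each step choose the target action whose predicted next state best matches where the pre-trained source policy would go in the source environment. Everything here is an assumption made for illustration; the names (source_policy, source_step, TargetModel, adapt_action, target_step), the linear model class, and the random-shooting action search are not the paper's actual implementation.

```python
# A minimal sketch of model-based policy adaptation, under the assumptions
# named in the lead-in above. Not the paper's algorithm or code.
import numpy as np

rng = np.random.default_rng(0)
DIM = 2  # toy state/action dimension

def source_policy(s):
    # Pre-trained policy from the source environment (assumed given).
    return -0.5 * s

def source_step(s, a):
    # Known source dynamics: where the source policy's action would lead.
    return s + 0.1 * a

def target_step(s, a):
    # Target dynamics, unknown to the agent (perturbed from the source).
    return s + 0.2 * a + 0.01

class TargetModel:
    """Online least-squares model of the unknown target dynamics.

    Fits s' ~= W @ [s, a, 1] on all transitions observed so far; refitting
    on the aggregated dataset is a simple stand-in for the no-regret online
    model learning the abstract refers to.
    """
    def __init__(self, dim):
        self.data = []
        self.W = np.zeros((dim, 2 * dim + 1))

    def predict(self, s, a):
        return self.W @ np.concatenate([s, a, [1.0]])

    def update(self, s, a, s_next):
        self.data.append((np.concatenate([s, a, [1.0]]), s_next))
        X = np.stack([x for x, _ in self.data])
        Y = np.stack([y for _, y in self.data])
        # Ridge-regularized least squares over all aggregated data.
        self.W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ Y).T

def adapt_action(model, s, n_candidates=256):
    # Pick the target action whose *predicted* next state is closest to the
    # next state the source policy would reach in the source environment,
    # i.e., recover the source trajectory through the learned model.
    s_goal = source_step(s, source_policy(s))
    cands = rng.uniform(-1.0, 1.0, size=(n_candidates, s.size))
    preds = np.stack([model.predict(s, a) for a in cands])
    return cands[np.argmin(np.linalg.norm(preds - s_goal, axis=1))]

model = TargetModel(DIM)
s = np.ones(DIM)
for t in range(50):
    a = adapt_action(model, s)
    s_next = target_step(s, a)   # one interaction with the target environment
    model.update(s, a, s_next)   # aggregate the transition and refit online
    s = s_next
print("final state:", s)
```

Random shooting over candidate actions keeps the sketch short; the paper's actual controller, model class, and the convergence guarantees mentioned in the abstract are more sophisticated than this toy loop.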
