Paper Title
Causal Dynamics Learning for Task-Independent State Abstraction
Paper Authors
Paper Abstract
Learning dynamics models accurately is an important goal for Model-Based Reinforcement Learning (MBRL), but most MBRL methods learn a dense dynamics model that is vulnerable to spurious correlations and therefore generalizes poorly to unseen states. In this paper, we introduce Causal Dynamics Learning for Task-Independent State Abstraction (CDL), which first learns a theoretically grounded causal dynamics model that removes unnecessary dependencies between state variables and the action, thus generalizing well to unseen states. A state abstraction can then be derived from the learned dynamics, which not only improves sample efficiency but also applies to a wider range of tasks than existing state abstraction methods. Evaluated on two simulated environments and downstream tasks, both the dynamics model and the policies learned by the proposed method generalize well to unseen states, and the derived state abstraction improves sample efficiency compared to learning without it.
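To make the idea of deriving a state abstraction from a learned causal graph concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm): given a boolean parent matrix over state variables and the action, one simple abstraction keeps only the state variables reachable from the action through the causal graph, discarding variables the agent cannot influence. The graph below and the function name `controllable_vars` are illustrative assumptions.

```python
import numpy as np

# Hypothetical learned causal graph over n = 5 state variables plus 1 action.
# parents[i, j] = True means variable j (or the action, at column n) is a
# causal parent of next-step state variable i.
n = 5
parents = np.zeros((n, n + 1), dtype=bool)
parents[0, n] = True  # action -> s0
parents[1, 0] = True  # s0 -> s1
parents[2, 2] = True  # s2 depends only on itself (uninfluenced by the agent)
parents[3, 1] = True  # s1 -> s3
parents[4, 4] = True  # s4 depends only on itself

def controllable_vars(parents):
    """Return the state variables reachable from the action node via the
    causal graph -- one simple way to turn learned dynamics into a
    task-independent state abstraction."""
    n = parents.shape[0]
    reached, frontier = set(), [n]  # start the traversal from the action
    while frontier:
        src = frontier.pop()
        for i in range(n):
            if parents[i, src] and i not in reached:
                reached.add(i)
                frontier.append(i)
    return sorted(reached)

print(controllable_vars(parents))  # -> [0, 1, 3]: s2 and s4 are dropped
```

Because the abstraction depends only on the dynamics and not on any reward function, the same reduced state set can be reused across downstream tasks, which is the sense in which it is task-independent.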