Paper Title
Transfer RL via the Undo Maps Formalism
Paper Authors
Paper Abstract
Transferring knowledge across domains is one of the most fundamental problems in machine learning, but doing so effectively in the context of reinforcement learning remains largely an open problem. Current methods make strong assumptions about the specifics of the task, often lack principled objectives, and -- crucially -- modify individual policies, which might be sub-optimal when the domains differ due to a drift in the state space, i.e., a shift that is intrinsic to the environment and therefore affects every agent interacting with it. To address these drawbacks, we propose TvD: transfer via distribution matching, a framework to transfer knowledge across interactive domains. We approach the problem from a data-centric perspective, characterizing the discrepancy between environments by means of a (potentially complex) transformation between their state spaces, and thus posing the problem of transfer as learning to undo this transformation. To accomplish this, we introduce a novel optimization objective based on an optimal transport distance between two distributions over trajectories -- those generated by an already-learned policy in the source domain and a learnable pushforward policy in the target domain. We show that this objective leads to a policy update scheme reminiscent of imitation learning, and derive an efficient algorithm to implement it. Our experiments in simple gridworlds show that this method yields successful transfer learning across a wide range of environment transformations.
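As a rough sketch (the notation here is ours and only illustrative, not taken from the paper), the objective described in the abstract can be read as learning an undo map $U$ from target states back to source states by minimizing an optimal transport distance between trajectory distributions:

$$
\min_{U} \; W\big( p^{\mathcal{S}}_{\pi_S}, \; U_{\#}\, p^{\mathcal{T}}_{\pi_S \circ U} \big),
$$

where $\pi_S$ is the already-learned source policy, $\pi_S \circ U$ is the pushforward policy that acts in the target domain by first mapping target states through $U$, $p^{\mathcal{S}}_{\pi_S}$ and $p^{\mathcal{T}}_{\pi_S \circ U}$ are the trajectory distributions these policies induce in the source and target environments, $U_{\#}$ pushes target trajectories back into the source state space so that the two distributions are comparable, and $W$ is an optimal transport (e.g., Wasserstein) distance. The exact parameterization and transport cost used in TvD may differ from this sketch.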