Paper Title

Learning to be Safe: Deep RL with a Safety Critic

Paper Authors

Krishnan Srinivasan, Benjamin Eysenbach, Sehoon Ha, Jie Tan, Chelsea Finn

Paper Abstract

Safety is an essential component for deploying reinforcement learning (RL) algorithms in real-world scenarios, and is critical during the learning process itself. A natural first approach toward safe RL is to manually specify constraints on the policy's behavior. However, just as learning has enabled progress in large-scale development of AI systems, learning safety specifications may also be necessary to ensure safety in messy open-world environments where manual safety specifications cannot scale. Akin to how humans learn incrementally, starting in child-safe environments, we propose to learn how to be safe in one set of tasks and environments, and then use that learned intuition to constrain future behaviors when learning new, modified tasks. We empirically study this form of safety-constrained transfer learning in three challenging domains: simulated navigation, quadruped locomotion, and dexterous in-hand manipulation. In comparison to standard deep RL techniques and prior approaches to safe RL, we find that our method enables learning new tasks and in new environments with substantially fewer safety incidents, such as falling or dropping an object, and with faster, more stable learning. This suggests a path forward not only for safer RL systems, but also for more effective RL systems.
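The abstract describes the method only at a high level: learn a notion of safety (a safety critic) on one set of tasks, then use it to constrain behavior when learning new tasks. Below is a minimal Python sketch of one plausible reading of that idea, in which a policy's sampled actions are filtered through a learned estimate of failure probability. All names here (SafetyCritic, failure_prob, select_safe_action, safe_threshold) and the rejection-sampling scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


class SafetyCritic:
    """Toy stand-in for a learned failure-probability estimator Q_safe(s, a)."""

    def failure_prob(self, state, action):
        # Placeholder heuristic: a real critic would be a neural network
        # trained on observed safety incidents (e.g. falls or drops).
        return float(np.clip(np.dot(state, action), 0.0, 1.0))


def select_safe_action(sample_action, critic, state,
                       safe_threshold=0.1, max_tries=32):
    """Rejection-sample actions from the policy until one is deemed safe.

    If no candidate passes the threshold, fall back to the least risky
    candidate seen, so the agent always acts.
    """
    candidates = []
    for _ in range(max_tries):
        action = sample_action(state)
        risk = critic.failure_prob(state, action)
        if risk <= safe_threshold:
            return action
        candidates.append((risk, action))
    return min(candidates, key=lambda ra: ra[0])[1]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    critic = SafetyCritic()
    state = rng.uniform(-1.0, 1.0, size=4)
    policy = lambda s: rng.uniform(-1.0, 1.0, size=4)  # stand-in policy
    print("chosen action:", select_safe_action(policy, critic, state))
```

In the paper's setting, the critic would presumably be trained on safety incidents (falls, dropped objects) gathered during pretraining, and the threshold would control how conservatively the transferred policy explores the new task.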
