Paper Title
Safe Reinforcement Learning via Shielding under Partial Observability
Paper Authors
Paper Abstract
Safe exploration is a common problem in reinforcement learning (RL) that aims to prevent agents from making disastrous decisions while exploring their environment. A family of approaches to this problem assumes domain knowledge in the form of a (partial) model of this environment to decide upon the safety of an action. A so-called shield forces the RL agent to select only safe actions. However, for adoption in various applications, one must look beyond enforcing safety and also ensure the applicability of RL with good performance. We extend the applicability of shields via tight integration with state-of-the-art deep RL, and provide an extensive, empirical study in challenging, sparse-reward environments under partial observability. We show that a carefully integrated shield ensures safety and can improve the convergence rate and final performance of RL agents. We furthermore show that a shield can be used to bootstrap state-of-the-art RL agents: they remain safe after initial learning in a shielded setting, allowing us to disable a potentially too conservative shield eventually.
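To illustrate the core mechanism the abstract describes, below is a minimal Python sketch of shield-based action masking: the agent's policy is restricted to actions a shield deems safe, and renormalized over that safe set. The action set, the `shield_is_safe` predicate, and the policy shown are hypothetical placeholders for illustration; a real shield, as in the paper, would be derived from a (partial) model of the environment under partial observability.

```python
import random

ACTIONS = ["left", "right", "up", "down"]

def shield_is_safe(observation_history, action):
    """Placeholder safety check: a real shield would query a (partial)
    model of the environment, e.g. reason over beliefs in a POMDP."""
    # Illustrative rule: forbid moving "up" right after observing a "wall".
    return not (observation_history
                and observation_history[-1] == "wall"
                and action == "up")

def shielded_action(policy_probs, observation_history):
    """Restrict the agent's choice to shield-approved actions and
    renormalize the policy over that safe set."""
    safe = [a for a in ACTIONS if shield_is_safe(observation_history, a)]
    if not safe:                      # fallback if the shield blocks everything
        safe = ACTIONS
    weights = [policy_probs.get(a, 0.0) for a in safe]
    total = sum(weights)
    if total == 0.0:                  # uniform over safe actions if the policy gives them no mass
        return random.choice(safe)
    return random.choices(safe, weights=[w / total for w in weights], k=1)[0]

# Example: the policy prefers "up", but the shield overrules it after a "wall" observation.
policy = {"left": 0.1, "right": 0.1, "up": 0.7, "down": 0.1}
print(shielded_action(policy, observation_history=["wall"]))
```

This sketch only conveys the interface between policy and shield; the paper's contribution concerns how such a shield is constructed and tightly integrated with deep RL agents, and how it can later be disabled once the agent has been bootstrapped to behave safely.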