Paper Title
Exploration Policies for On-the-Fly Controller Synthesis: A Reinforcement Learning Approach
Paper Authors
Paper Abstract
Controller synthesis is in essence a case of model-based planning for non-deterministic environments, in which plans (actually "strategies") are meant to preserve system goals indefinitely. In the case of supervisory control, environments are specified as the parallel composition of state machines, and valid strategies are required to be "non-blocking" (i.e., always enabling the environment to reach certain marked states) in addition to safe (i.e., keeping the system within a safe zone). Recently, On-the-fly Directed Controller Synthesis techniques were proposed that avoid exploring the entire (and exponentially large) environment space, at the cost of non-maximal permissiveness, to either find a strategy or conclude that none exists. The incremental exploration of the plant is currently guided by a domain-independent, human-designed heuristic. In this work, we propose a new method for obtaining heuristics based on Reinforcement Learning (RL). The synthesis algorithm is thus framed as an RL task with an unbounded action space, and a modified version of DQN is used. With a simple and general set of features that abstracts both states and actions, we show that it is possible to learn heuristics on small versions of a problem that generalize to larger instances, effectively performing zero-shot policy transfer. Our agents learn from scratch in a highly partially observable RL task and, on instances unseen during training, outperform the existing heuristic overall.
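To make the abstract's DQN modification more concrete, below is a minimal, hypothetical sketch (not the authors' implementation): it assumes each currently enabled exploration action is summarized by a fixed-size feature vector, so a single scoring network can rank however many candidates exist at a given step, which is one way to accommodate an unbounded action space. The names ActionScorer, select_action, and the feature dimension are illustrative assumptions.

```python
# Hypothetical sketch of a DQN-style scorer over a variable-size action set.
import torch
import torch.nn as nn


class ActionScorer(nn.Module):
    """Maps a fixed-size (state, action) feature vector to a scalar Q-value."""

    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (num_available_actions, n_features) -> (num_available_actions,)
        return self.net(features).squeeze(-1)


def select_action(scorer: ActionScorer, action_features: torch.Tensor, eps: float) -> int:
    """Epsilon-greedy choice over however many actions are currently enabled."""
    if torch.rand(()) < eps:
        return int(torch.randint(len(action_features), ()))
    with torch.no_grad():
        return int(scorer(action_features).argmax())


# Usage: at each synthesis step, featurize the frontier of candidate expansions
# (random placeholders here) and pick the next transition to explore.
scorer = ActionScorer(n_features=8)
frontier = torch.rand(5, 8)  # 5 candidate expansions, 8 features each
next_expansion = select_action(scorer, frontier, eps=0.1)
```

Because the network scores each action from its features rather than indexing a fixed output layer, the same trained scorer can, in principle, be applied to larger instances with more candidate actions, which is the property the abstract's zero-shot transfer claim relies on.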