Paper Title
Reward Bonuses with Gain Scheduling Inspired by Iterative Deepening Search
Paper Authors
Abstract
This paper introduces a novel method of adding intrinsic bonuses to the task-oriented reward function in order to efficiently facilitate exploration in reinforcement learning. While various bonuses have been designed to date, they are analogous to the depth-first and breadth-first search algorithms in graph theory. This paper therefore first designs two bonuses, one corresponding to each of these search strategies. A heuristic gain scheduling is then applied to the designed bonuses, inspired by iterative deepening search, which is known to inherit the advantages of both search algorithms. The proposed method is expected to allow the agent to efficiently reach the best solution in deeper states by gradually exploring unknown states. In three locomotion tasks with dense rewards and three simple tasks with sparse rewards, it is shown that the two types of bonuses complementarily contribute to performance improvements across the different tasks. Moreover, by combining them with the proposed gain scheduling, all tasks can be accomplished with high performance.
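The core idea can be illustrated with a minimal sketch: a task reward is augmented by two intrinsic bonuses whose gains are scheduled over training. This is not the paper's implementation; the function names, the linear schedule, and the cycle length `period` are all assumptions made for illustration, standing in for whatever bonus terms and schedule the paper actually derives.

```python
# Hypothetical sketch: gain-scheduled mixing of two intrinsic bonuses,
# loosely mimicking iterative deepening's gradually increasing depth limit.

def id_gain_schedule(episode, period=100):
    """Return (breadth gain, depth gain) for the current episode.

    Early in each cycle the breadth-first-like bonus dominates (wide,
    shallow exploration); as the cycle progresses, weight shifts to the
    depth-first-like bonus, analogous to raising the depth limit.
    """
    phase = (episode % period) / period  # 0 -> 1 within each cycle
    return 1.0 - phase, phase

def shaped_reward(task_reward, bfs_bonus, dfs_bonus, episode):
    """Combine the task reward with the two scheduled intrinsic bonuses."""
    g_b, g_d = id_gain_schedule(episode)
    return task_reward + g_b * bfs_bonus + g_d * dfs_bonus
```

At `episode=0` only the breadth-like bonus is active; at the midpoint of a cycle the two bonuses are weighted equally, so the agent's shaping signal gradually trades wide coverage for deeper commitment.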