Paper Title
Value Summation: A Novel Scoring Function for MPC-based Model-based Reinforcement Learning
Paper Authors
Paper Abstract
This paper proposes a novel scoring function for the planning module of MPC-based reinforcement learning methods to address the inherent bias of using the reward function to score trajectories. The proposed method improves the learning efficiency of existing MPC-based model-based RL (MBRL) methods by using the discounted sum of values as the trajectory score. The method utilizes optimal trajectories to guide policy learning and updates its state-action value function based on real-world and augmented onboard data. The learning efficiency of the proposed method is evaluated in selected MuJoCo Gym environments as well as in learning locomotion skills for a simulated model of the Cassie robot. The results demonstrate that the proposed method outperforms current state-of-the-art algorithms in terms of learning efficiency and average reward return.
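As a rough illustration of the scoring idea described in the abstract (not the authors' implementation), the sketch below scores a candidate MPC action sequence by the discounted sum of learned state-action values along an imagined rollout, and picks the best candidate. The names `dynamics`, `value_fn`, `candidate_action_seqs`, and the discount `gamma` are assumed placeholders.

```python
import numpy as np

def score_trajectory_value_sum(state, action_seq, dynamics, value_fn, gamma=0.99):
    """Score an action sequence by the discounted sum of state-action values
    along the imagined rollout (a sketch; dynamics/value_fn are placeholders)."""
    score, discount = 0.0, 1.0
    for action in action_seq:
        score += discount * value_fn(state, action)  # sum values, not raw rewards
        state = dynamics(state, action)              # imagined next state from the model
        discount *= gamma
    return score

def plan(state, candidate_action_seqs, dynamics, value_fn, gamma=0.99):
    """Return the candidate sequence with the highest value-summation score."""
    scores = [score_trajectory_value_sum(state, seq, dynamics, value_fn, gamma)
              for seq in candidate_action_seqs]
    return candidate_action_seqs[int(np.argmax(scores))]
```

In this sketch, replacing `value_fn` with a one-step reward model recovers the conventional reward-based scoring that the paper argues is biased over short planning horizons.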