Paper Title
Robot Motion Planning as Video Prediction: A Spatio-Temporal Neural Network-based Motion Planner
Paper Authors
Paper Abstract
Neural network (NN)-based methods have emerged as an attractive approach to robot motion planning due to the strong learning capabilities of NN models and their inherently high parallelism. Despite ongoing development in this direction, the efficient capture and processing of important sequential and spatial information, in a direct and simultaneous way, remains relatively under-explored. To overcome this challenge and unlock the potential of neural networks for motion planning tasks, in this paper we propose STP-Net, an end-to-end learning framework that fully extracts and leverages important spatio-temporal information to form an efficient neural motion planner. By interpreting the movement of the robot as a video clip, robot motion planning is transformed into a video prediction task that STP-Net can perform in a spatially and temporally efficient way. Empirical evaluations across different seen and unseen environments show that, with nearly 100% accuracy (i.e., success rate), STP-Net demonstrates very promising performance with respect to both planning speed and path cost. Compared with existing NN-based motion planners, STP-Net achieves at least 5x, 2.6x, and 1.8x faster planning speed with lower path cost on 2D Random Forest, 2D Maze, and 3D Random Forest environments, respectively. Furthermore, STP-Net can quickly and simultaneously compute multiple near-optimal paths in multi-robot motion planning tasks.
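The abstract's core idea, treating planning as iterative next-frame prediction over a sequence of robot/environment "frames", can be sketched roughly as follows. This is a minimal illustration, not STP-Net itself: the paper does not specify the model here, so `greedy_step` below is a hypothetical stand-in for the learned next-frame predictor, and the occupancy-grid frame representation is an assumption.

```python
import numpy as np

def plan_as_video_prediction(occupancy, start, goal, predict_step, max_steps=100):
    """Plan by repeatedly 'predicting the next frame': each frame is the
    robot's position on the obstacle map, and the predictor proposes the
    next position until the goal is reached (or max_steps is exhausted)."""
    path = [start]
    pos = start
    for _ in range(max_steps):
        if pos == goal:
            break
        pos = predict_step(occupancy, pos, goal)  # 'next frame' of the video
        path.append(pos)
    return path  # may be incomplete if the goal was not reached

def greedy_step(occupancy, pos, goal):
    """Hypothetical stand-in for a learned predictor: move one cell toward
    the goal (Manhattan distance), avoiding occupied cells (marked 1)."""
    best, best_d = pos, float("inf")
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]:
        r, c = pos[0] + dr, pos[1] + dc
        if (0 <= r < occupancy.shape[0] and 0 <= c < occupancy.shape[1]
                and occupancy[r, c] == 0):
            d = abs(r - goal[0]) + abs(c - goal[1])
            if d < best_d:
                best, best_d = (r, c), d
    return best

# Example on an empty 5x5 grid: the predicted 'video' is the path itself.
grid = np.zeros((5, 5), dtype=int)
path = plan_as_video_prediction(grid, (0, 0), (4, 4), greedy_step)
print(path[-1])  # → (4, 4)
```

In the actual framework, the stand-in predictor would be replaced by a trained spatio-temporal network, which is also what enables the simultaneous multi-robot prediction mentioned at the end of the abstract.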