Paper Title

Learning Depth Vision-Based Personalized Robot Navigation From Dynamic Demonstrations in Virtual Reality

Authors

Jorge de Heuvel, Nathan Corral, Benedikt Kreis, Jacobus Conradi, Anne Driemel, Maren Bennewitz

Abstract

For the best human-robot interaction experience, the robot's navigation policy should take into account personal preferences of the user. In this paper, we present a learning framework complemented by a perception pipeline to train a depth vision-based, personalized navigation controller from user demonstrations. Our virtual reality interface enables the demonstration of robot navigation trajectories under motion of the user for dynamic interaction scenarios. The novel perception pipeline employs a variational autoencoder in combination with a motion predictor. It compresses the perceived depth images to a latent state representation to enable efficient reasoning of the learning agent about the robot's dynamic environment. In a detailed analysis and ablation study, we evaluate different configurations of the perception pipeline. To further quantify the navigation controller's quality of personalization, we develop and apply a novel metric to measure preference reflection based on the Fréchet Distance. We discuss the robot's navigation performance in various virtual scenes and demonstrate the first personalized robot navigation controller that solely relies on depth images. A supplemental video highlighting our approach is available online.
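The abstract describes a personalization metric based on the Fréchet distance between trajectories. As a rough illustrative sketch only, and not the paper's actual metric (whose exact definition and trajectory representation are not given in the abstract), the discrete Fréchet distance between two 2D trajectories can be computed with the standard Eiter-Mannila dynamic program:

```python
import numpy as np

def discrete_frechet_distance(p, q):
    """Discrete Fréchet distance between two polygonal trajectories.

    p, q: sequences of 2D waypoints, shapes (n, 2) and (m, 2).
    Returns the minimal "leash length" needed to traverse both
    trajectories monotonically from start to end.
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    n, m = len(p), len(q)
    # Pairwise Euclidean distances between all waypoint pairs.
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    # ca[i, j]: coupling distance for the prefixes p[:i+1] and q[:j+1].
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]

# Hypothetical usage: compare a controller-driven trajectory against a
# user-demonstrated one (coordinates are made up for illustration).
demo = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.5), (3.0, 0.0)]
robot = [(0.0, 0.0), (1.0, 0.2), (2.0, 0.3), (3.0, 0.1)]
print(f"Discrete Fréchet distance: {discrete_frechet_distance(demo, robot):.3f} m")
```

In the paper's setting, such a distance would presumably be evaluated between the trajectory executed by the learned controller and the user's demonstrated trajectory, with smaller values indicating closer preference reflection.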
