Paper Title

Conditional Visual Servoing for Multi-Step Tasks

Authors

Sergio Izquierdo, Max Argus, Thomas Brox

Abstract

Visual Servoing has been effectively used to move a robot into specific target locations or to track a recorded demonstration. It does not require manual programming, but it is typically limited to settings where one demonstration maps to one environment state. We propose a modular approach to extend visual servoing to scenarios with multiple demonstration sequences. We call this conditional servoing, as we choose the next demonstration conditioned on the observation of the robot. This method presents an appealing strategy to tackle multi-step problems, as individual demonstrations can be combined flexibly into a control policy. We propose different selection functions and compare them on a shape-sorting task in simulation. With the reprojection error yielding the best overall results, we implement this selection function on a real robot and show the efficacy of the proposed conditional servoing. For videos of our experiments, please check out our project page: https://lmb.informatik.uni-freiburg.de/projects/conditional_servoing/
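The core idea of conditional servoing, choosing the next demonstration sequence conditioned on the robot's current observation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `reprojection_error` here is a simplified stand-in (mean distance between matched 2D keypoints) for the selection function the authors evaluate, and all names and data are illustrative assumptions.

```python
import numpy as np

def reprojection_error(obs_pts, demo_pts):
    """Mean Euclidean distance between matched 2D keypoints (pixels).

    A simplified proxy for the reprojection-error selection function;
    assumes keypoint correspondences are already established.
    """
    return float(np.mean(np.linalg.norm(obs_pts - demo_pts, axis=1)))

def select_demonstration(obs_pts, demos):
    """Pick the demonstration whose start frame best matches the observation.

    `demos` maps a demonstration name to the matched keypoints of its
    first frame; the demo with the lowest error is served next.
    """
    errors = {name: reprojection_error(obs_pts, pts)
              for name, pts in demos.items()}
    return min(errors, key=errors.get)

# Toy example with synthetic keypoints (hypothetical demo names).
obs = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])
demos = {
    "approach": obs + 2.0,   # close to the observation -> low error
    "insert":   obs + 50.0,  # far from the observation -> high error
}
print(select_demonstration(obs, demos))  # -> approach
```

Selecting by minimum error at each step lets individual demonstrations be chained flexibly into a multi-step control policy, which is what makes the modular formulation appealing.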
