Title
Continuous close-range 3D object pose estimation
Authors
Abstract
In the context of future manufacturing lines, removing fixtures will be a fundamental step towards increasing the flexibility of autonomous systems in assembly and logistics operations. Vision-based 3D pose estimation is a necessity for accurately handling objects that might not be placed at fixed positions during robot task execution. Industrial tasks bring multiple challenges for robust object pose estimation, such as difficult object properties, tight cycle times, and constraints on camera views. In particular, when interacting with objects we have to work with close-range partial views of objects, which pose a new challenge for typical view-based pose estimation methods. In this paper, we present a 3D pose estimation method based on a gradient-ascent particle filter that integrates new observations on-the-fly to improve the pose estimate. Thereby, we can apply this method online during task execution to save valuable cycle time. In contrast to other view-based pose estimation methods, we model potential views in the full 6-dimensional space, which allows us to cope with close-range partial object views. We demonstrate the approach on a real assembly task, in which the algorithm usually converges to the correct pose within 10-15 iterations with an average accuracy below 8 mm.
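The update loop the abstract describes — gradient-ascent refinement of particles followed by reweighting against each new observation — can be sketched generically. The following is a minimal illustration on a toy 3-DoF pose with a simple Gaussian observation model; the likelihood, gradient, step sizes, and particle counts are all illustrative assumptions, not the authors' implementation (which scores full 6-DoF views against camera images).

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_POSE = np.array([0.5, -0.2, 0.1])  # toy 3-DoF "pose" (x, y, yaw); the paper uses full 6-DoF

def log_likelihood(pose, obs):
    # Assumed toy Gaussian observation model: higher when pose matches the observation.
    return -0.5 * np.sum((pose - obs) ** 2) / 0.05 ** 2

def grad_log_likelihood(pose, obs):
    # Analytic gradient of the toy model; a real system might instead use
    # finite differences on a view-matching score.
    return -(pose - obs) / 0.05 ** 2

def pf_step(particles, obs, ascent_rate=1e-3, ascent_iters=5):
    # 1) Gradient ascent: nudge each particle uphill on the observation likelihood.
    for _ in range(ascent_iters):
        particles += ascent_rate * np.array(
            [grad_log_likelihood(p, obs) for p in particles]
        )
    # 2) Reweight particles by the new observation (log-sum-exp for stability).
    logw = np.array([log_likelihood(p, obs) for p in particles])
    weights = np.exp(logw - logw.max())
    weights /= weights.sum()
    # 3) Resample to avoid weight degeneracy, with small diffusion noise.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx] + rng.normal(0.0, 0.01, particles.shape)

# Initialize particles broadly, then integrate observations on-the-fly.
particles = rng.uniform(-1.0, 1.0, size=(200, 3))
for _ in range(15):  # comparable to the 10-15 iteration regime reported
    obs = TRUE_POSE + rng.normal(0.0, 0.02, 3)  # one noisy observation per frame
    particles = pf_step(particles, obs)

estimate = particles.mean(axis=0)
```

Because each observation is folded in as it arrives, the estimate refines during task execution rather than requiring a separate offline estimation phase, which is the cycle-time advantage the abstract highlights.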