Paper Title
Robust 6D Object Pose Estimation by Learning RGB-D Features
Paper Authors
Paper Abstract
Accurate 6D object pose estimation is fundamental to robotic manipulation and grasping. Previous methods follow a local optimization approach which minimizes the distance between closest point pairs to handle the rotation ambiguity of symmetric objects. In this work, we propose a novel discrete-continuous formulation for rotation regression to resolve this local-optimum problem. We uniformly sample rotation anchors in SO(3), and predict a constrained deviation from each anchor to the target, as well as uncertainty scores for selecting the best prediction. Additionally, the object location is detected by aggregating point-wise vectors pointing to the 3D center. Experiments on two benchmarks, LINEMOD and YCB-Video, show that the proposed method outperforms state-of-the-art approaches. Our code is available at https://github.com/mentian/object-posenet.
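To illustrate the anchor-based rotation scheme the abstract describes, the following is a minimal sketch, not the paper's implementation: given rotation anchors as unit quaternions, per-anchor deviation quaternions, and per-anchor uncertainty scores (all names here are hypothetical; the paper's actual parameterization and selection rule may differ), the anchor with the lowest uncertainty is chosen and its deviation is composed with it to produce the final rotation estimate.

```python
import numpy as np

def quat_mul(q, r):
    # Hamilton product of two quaternions in (w, x, y, z) order.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def select_rotation(anchors, deviations, scores):
    """Hypothetical sketch of anchor-based rotation selection.

    anchors:    (N, 4) unit quaternions uniformly sampled in SO(3)
    deviations: (N, 4) predicted per-anchor deviation quaternions
    scores:     (N,)   predicted per-anchor uncertainty (lower is better)
    Returns the composed, renormalized rotation for the best anchor.
    """
    best = int(np.argmin(scores))  # pick the least-uncertain anchor
    q = quat_mul(deviations[best], anchors[best])
    return q / np.linalg.norm(q)   # renormalize against numeric drift
```

With an identity deviation, the selected anchor is returned unchanged, which shows how the discrete anchor set bounds the continuous regression to a small deviation around each sample.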