Paper Title
Hybrid Physical Metric For 6-DoF Grasp Pose Detection
Paper Authors
Abstract
6-DoF grasp pose detection for multiple grasps and multiple objects is a challenging task in the field of intelligent robotics. To imitate the human reasoning ability for grasping objects, data-driven methods are widely studied. With the introduction of large-scale datasets, we discover that a single physical metric usually generates several discrete levels of grasp confidence scores, which cannot finely distinguish millions of grasp poses and leads to inaccurate prediction results. In this paper, we propose a hybrid physical metric to solve this evaluation insufficiency. First, we define a novel metric based on the force-closure metric, supplemented by measurements of object flatness, gravity, and collision. Second, we leverage this hybrid physical metric to generate elaborate confidence scores. Third, to learn the new confidence scores effectively, we design a multi-resolution network called Flatness Gravity Collision GraspNet (FGC-GraspNet). FGC-GraspNet proposes a multi-resolution feature learning architecture for multiple tasks and introduces a new joint loss function that enhances the average precision of grasp detection. The network evaluation and extensive real-robot experiments demonstrate the effectiveness of our hybrid physical metric and FGC-GraspNet. Our method achieves a 90.5% success rate in real-world cluttered scenes. Our code is available at https://github.com/luyh20/FGC-GraspNet.
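The abstract describes a hybrid metric that augments a force-closure score with flatness, gravity, and collision terms to produce a single refined confidence score per grasp. As a rough illustration only, a weighted combination of such sub-scores might look like the sketch below; the component names, weights, and the linear form are hypothetical placeholders, not the formulation actually defined in the paper:

```python
import numpy as np

def hybrid_grasp_score(force_closure, flatness, gravity, collision_free,
                       weights=(0.5, 0.2, 0.15, 0.15)):
    """Combine physical sub-metrics into one grasp confidence score.

    Each argument is an illustrative sub-score in [0, 1]; the weights and
    the weighted-sum form are assumptions for this sketch, not the
    paper's actual hybrid physical metric.
    """
    components = np.array([force_closure, flatness, gravity, collision_free])
    w = np.array(weights)
    # Normalized weighted sum keeps the result in [0, 1].
    return float(np.dot(w, components) / w.sum())

# Example: strong force closure on a fairly flat, collision-free region.
score = hybrid_grasp_score(0.9, 0.8, 0.7, 1.0)
```

Replacing a single binary or coarsely quantized metric with a continuous combination like this is what allows millions of candidate grasp poses to be ranked at a finer granularity, which is the evaluation insufficiency the abstract targets.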