Title
Grasp State Assessment of Deformable Objects Using Visual-Tactile Fusion Perception
Authors
Abstract
Humans can quickly determine, through vision and touch, the force required to grasp a deformable object so as to prevent it from sliding or deforming excessively; this remains a challenging task for robots. To address this issue, this paper proposes a novel 3D-convolution-based visual-tactile fusion deep neural network (C3D-VTFN) to evaluate the grasp state of various deformable objects. Specifically, we divide the grasp states of deformable objects into three categories: sliding, appropriate, and excessive. A dataset for training and testing the proposed network is built through extensive grasping and lifting experiments with different gripper widths and forces on 16 deformable objects, using a robotic arm equipped with a wrist-mounted camera and a tactile sensor. As a result, a classification accuracy of up to 99.97% is achieved. Furthermore, several delicate grasp experiments based on the proposed network are implemented. The experimental results demonstrate that C3D-VTFN is accurate and efficient enough for grasp state assessment and can be widely applied to automatic force control, adaptive grasping, and other visual-tactile spatiotemporal sequence learning problems.
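The abstract describes a two-stream architecture: 3D convolutions extract spatiotemporal features from visual and tactile frame sequences, which are fused and classified into the three grasp states. The sketch below is a minimal PyTorch illustration of that idea only; the layer sizes, branch depths, and the name `VisualTactileFusionNet` are assumptions, not the paper's actual C3D-VTFN architecture.

```python
# Hypothetical sketch of a 3D-convolutional visual-tactile fusion classifier.
# All layer widths and shapes are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

class VisualTactileFusionNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # One 3D-conv branch per modality, operating on short frame sequences.
        self.visual_branch = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling over (frames, H, W)
        )
        self.tactile_branch = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        # Fused features -> logits for {sliding, appropriate, excessive}.
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, visual_clip: torch.Tensor, tactile_clip: torch.Tensor) -> torch.Tensor:
        v = self.visual_branch(visual_clip).flatten(1)   # (batch, 16)
        t = self.tactile_branch(tactile_clip).flatten(1)  # (batch, 16)
        return self.classifier(torch.cat([v, t], dim=1))

# Dummy clips shaped (batch, channels, frames, height, width).
net = VisualTactileFusionNet()
logits = net(torch.randn(1, 3, 8, 32, 32), torch.randn(1, 3, 8, 16, 16))
print(logits.shape)  # one score per grasp-state class
```

The key design point conveyed here is early per-modality feature extraction followed by late fusion (concatenation) before classification, which lets each branch learn modality-specific spatiotemporal features.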