Paper Title

Visual-Tactile Sensing for Real-time Liquid Volume Estimation in Grasping

Paper Authors

Fan Zhu, Ruixing Jia, Lei Yang, Youcan Yan, Zheng Wang, Jia Pan, Wenping Wang

Paper Abstract

We propose a deep visuo-tactile model for real-time estimation of the liquid inside a deformable container in a proprioceptive way. We fuse two sensory modalities, i.e., the raw visual inputs from the RGB camera and the tactile cues from our specific tactile sensor, without any extra sensor calibrations. The robotic system is controlled and adjusted in real time based on the estimation model. The main contributions and novelties of our work are as follows: 1) Explore a proprioceptive way for liquid volume estimation by developing an end-to-end predictive model with multi-modal convolutional networks, which achieves high precision with an error of around 2 ml in the experimental validation. 2) Propose a multi-task learning architecture which comprehensively considers the losses from both classification and regression tasks, and comparatively evaluate the performance of each variant on the collected data and on an actual robotic platform. 3) Utilize the proprioceptive robotic system to accurately serve and control the requested volume of liquid, which flows continuously into a deformable container in real time. 4) Adaptively adjust the grasping plan to achieve more stable grasping and manipulation according to the real-time liquid volume prediction.
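
To make the fusion-and-multi-task idea in the abstract concrete, here is a minimal sketch, not the authors' implementation: two convolutional encoders (RGB frame and tactile reading) whose features are concatenated and fed to a classification head (coarse volume bin) and a regression head (volume in ml), trained with a combined loss. All layer sizes, the `alpha` weighting, and the assumption of a 1-channel tactile map are illustrative assumptions.

```python
# Hedged sketch of a visuo-tactile, multi-task volume estimator (illustrative only).
import torch
import torch.nn as nn

class VisuoTactileVolumeNet(nn.Module):
    def __init__(self, num_volume_classes: int = 10):
        super().__init__()
        # Visual branch: encodes the raw RGB camera frame.
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Tactile branch: encodes the tactile signal (assumed here to be a 1-channel map).
        self.tactile_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fused features feed two task heads.
        self.classifier = nn.Linear(64, num_volume_classes)  # coarse volume bin
        self.regressor = nn.Linear(64, 1)                    # volume in ml

    def forward(self, rgb, tactile):
        fused = torch.cat(
            [self.visual_encoder(rgb), self.tactile_encoder(tactile)], dim=1
        )
        return self.classifier(fused), self.regressor(fused).squeeze(-1)

def multi_task_loss(class_logits, volume_pred, class_label, volume_label, alpha=0.5):
    # Combined objective: cross-entropy for the classification task plus
    # a regression term, weighted by a hypothetical coefficient `alpha`.
    ce = nn.functional.cross_entropy(class_logits, class_label)
    mse = nn.functional.mse_loss(volume_pred, volume_label)
    return ce + alpha * mse
```

The joint loss reflects the paper's stated design choice of considering classification and regression losses together; the actual network depth, fusion scheme, and loss weighting used by the authors are not specified in the abstract.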
