Paper Title

Learning Continuous Grasping Function with a Dexterous Hand from Human Demonstrations

Paper Authors

Jianglong Ye, Jiashun Wang, Binghao Huang, Yuzhe Qin, Xiaolong Wang

Abstract

We propose to learn to generate grasping motion for manipulation with a dexterous hand using implicit functions. With continuous time inputs, the model can generate a continuous and smooth grasping plan. We name the proposed model Continuous Grasping Function (CGF). CGF is learned via generative modeling with a Conditional Variational Autoencoder using 3D human demonstrations. We first convert large-scale human-object interaction trajectories to robot demonstrations via motion retargeting, and then use these demonstrations to train CGF. During inference, we sample from CGF to generate different grasping plans in the simulator and select the successful ones to transfer to the real robot. By training on diverse human data, our CGF generalizes to the manipulation of multiple objects. Compared to previous planning algorithms, CGF is more efficient and achieves a significant improvement in success rate when transferred to grasping with the real Allegro Hand. Our project page is available at https://jianglongye.com/cgf.
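To make the core idea concrete, below is a minimal PyTorch sketch of a CGF-style decoder: a network conditioned on a latent code and an object feature that takes a continuous time input t in [0, 1], so that sweeping t yields a smooth grasping trajectory. All names, dimensions, and layer choices here (CGFDecoder, latent_dim, obj_dim, pose_dim) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a continuous-time grasp decoder, as described in
# the abstract: pose = decoder(z, object_feature, t). Because t is a
# continuous input, the plan can be queried at any time resolution.
import torch
import torch.nn as nn

class CGFDecoder(nn.Module):
    def __init__(self, latent_dim=64, obj_dim=128, pose_dim=22):
        super().__init__()
        # pose_dim is a placeholder for the hand configuration size
        # (e.g., 16 Allegro joint angles plus a 6-DoF wrist pose).
        self.net = nn.Sequential(
            nn.Linear(latent_dim + obj_dim + 1, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, pose_dim),
        )

    def forward(self, z, obj_feat, t):
        # t has shape (batch, 1); concatenating it with the latent and
        # object features makes the output an implicit function of time.
        return self.net(torch.cat([z, obj_feat, t], dim=-1))

decoder = CGFDecoder()
z = torch.randn(1, 64)          # one latent sample = one grasp plan
obj_feat = torch.randn(1, 128)  # object feature (e.g., from a point-cloud encoder)
ts = torch.linspace(0, 1, 50).unsqueeze(-1)  # 50 continuous time queries
trajectory = decoder(z.expand(50, -1), obj_feat.expand(50, -1), ts)
print(trajectory.shape)  # (50, 22): a densely sampled grasping trajectory
```

At inference time, the abstract's procedure corresponds to drawing multiple latent samples z, decoding each into a trajectory, executing them in a simulator, and keeping only the successful plans for transfer to the real robot.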
