Paper Title


Cerberus: Low-Drift Visual-Inertial-Leg Odometry For Agile Locomotion

Authors

Shuo Yang, Zixin Zhang, Zhengyu Fu, Zachary Manchester

Abstract


We present an open-source Visual-Inertial-Leg Odometry (VILO) state estimation solution, Cerberus, for legged robots that estimates position precisely on various terrains in real time using a set of standard sensors, including stereo cameras, an IMU, joint encoders, and contact sensors. In addition to estimating robot states, we also perform online kinematic parameter calibration and contact outlier rejection to substantially reduce position drift. Hardware experiments in various indoor and outdoor environments validate that calibrating kinematic parameters within Cerberus can reduce estimation drift to below 1% during long-distance, high-speed locomotion. Our drift results are better than those of any other state estimation method using the same set of sensors reported in the literature. Moreover, our state estimator performs well even when the robot experiences large impacts and camera occlusion. The implementation of the state estimator, along with the datasets used to compute our results, is available at https://github.com/ShuoYangRobotics/Cerberus.
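The abstract mentions fusing joint encoders and contact sensors with camera and IMU data. The leg-odometry part of such a fusion typically rests on a standard kinematic identity: while a stance foot is stationary on the ground, the body's linear velocity equals minus the foot velocity induced by the joint motion. The sketch below illustrates that identity for a hypothetical two-link planar leg; the link lengths and the numerical Jacobian are illustrative assumptions, not Cerberus's actual implementation (which also handles 3D kinematics, the body's angular velocity, and calibrated kinematic parameters).

```python
import numpy as np

def foot_position(q, l1=0.21, l2=0.21):
    """Foot position in the hip frame of a hypothetical 2-link planar leg.

    q = (hip, knee) joint angles; l1, l2 are assumed link lengths in meters.
    """
    hip, knee = q
    return np.array([
        l1 * np.sin(hip) + l2 * np.sin(hip + knee),   # forward
        -l1 * np.cos(hip) - l2 * np.cos(hip + knee),  # downward
    ])

def leg_jacobian(q, eps=1e-6):
    """Numerical Jacobian d(foot_position)/d(q) via central differences."""
    J = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (foot_position(q + dq) - foot_position(q - dq)) / (2 * eps)
    return J

def body_velocity_from_leg(q, qd):
    """Body velocity from one stance leg's joint angles q and rates qd.

    With the stance foot pinned to the ground, the foot moves at J(q) @ qd
    relative to the body, so the body moves at -J(q) @ qd relative to the
    world (angular-velocity term omitted in this planar sketch).
    """
    return -leg_jacobian(q) @ qd
```

For example, with the leg hanging straight down (`q = [0, 0]`) and the hip swinging at 1 rad/s, the foot sweeps forward at `l1 + l2 = 0.42` m/s, so the body is estimated to move backward at 0.42 m/s. In a full VILO pipeline these per-leg velocity measurements, gated by the contact sensors, are fused with IMU preintegration and visual features; errors in the assumed link lengths bias this velocity, which is why online kinematic calibration reduces drift.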
