Paper Title

TartanVO: A Generalizable Learning-based VO

Paper Authors

Wenshan Wang, Yaoyu Hu, Sebastian Scherer

Paper Abstract

We present the first learning-based visual odometry (VO) model, which generalizes to multiple datasets and real-world scenarios and outperforms geometry-based methods in challenging scenes. We achieve this by leveraging the SLAM dataset TartanAir, which provides a large amount of diverse synthetic data in challenging environments. Furthermore, to make our VO model generalize across datasets, we propose an up-to-scale loss function and incorporate the camera intrinsic parameters into the model. Experiments show that a single model, TartanVO, trained only on synthetic data, without any finetuning, can be generalized to real-world datasets such as KITTI and EuRoC, demonstrating significant advantages over the geometry-based methods on challenging trajectories. Our code is available at https://github.com/castacks/tartanvo.
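
The abstract highlights an up-to-scale loss function that makes the supervision invariant to the unknown absolute scale of monocular VO. Below is a minimal PyTorch sketch of such a loss, assuming the predicted and ground-truth translations are normalized to unit length before comparison; the function name, tensor shapes, and the simple rotation term are illustrative assumptions rather than the paper's exact implementation.

```python
import torch

def up_to_scale_loss(t_pred, t_gt, r_pred, r_gt, eps=1e-6):
    """Illustrative up-to-scale pose loss (sketch, not the official TartanVO code).

    t_pred, t_gt: (B, 3) translation vectors.
    r_pred, r_gt: (B, 3) rotation vectors (e.g., axis-angle), assumed parameterization.
    """
    # Scale-invariant translation term: compare directions only,
    # so the loss does not penalize the unrecoverable monocular scale.
    t_pred_dir = t_pred / torch.clamp(t_pred.norm(dim=1, keepdim=True), min=eps)
    t_gt_dir = t_gt / torch.clamp(t_gt.norm(dim=1, keepdim=True), min=eps)
    trans_loss = (t_pred_dir - t_gt_dir).norm(dim=1).mean()

    # Rotation term: plain distance between the rotation parameterizations.
    rot_loss = (r_pred - r_gt).norm(dim=1).mean()

    return trans_loss + rot_loss
```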
