Paper Title

CMRNet++: Map and Camera Agnostic Monocular Visual Localization in LiDAR Maps

Authors

Daniele Cattaneo, Domenico Giorgio Sorrenti, Abhinav Valada

Abstract

Localization is a critical enabler of autonomous robots. While deep learning has made significant strides in many computer vision tasks, it has yet to make a sizeable impact on improving the capabilities of metric visual localization. One of the major hindrances has been the inability of existing Convolutional Neural Network (CNN)-based pose regression methods to generalize to previously unseen places. Our recently introduced CMRNet effectively addresses this limitation by enabling map-independent monocular localization in LiDAR maps. In this paper, we now take it a step further by introducing CMRNet++, a significantly more robust model that not only generalizes to new places effectively, but is also independent of the camera parameters. We enable this capability by combining deep learning with geometric techniques, and by moving the metric reasoning outside the learning process. In this way, the weights of the network are not tied to a specific camera. Extensive evaluations of CMRNet++ on three challenging autonomous driving datasets, i.e., KITTI, Argoverse, and Lyft5, show that CMRNet++ outperforms CMRNet as well as other baselines by a large margin. More importantly, for the first time, we demonstrate the ability of a deep learning approach to accurately localize without any retraining or fine-tuning in a completely new environment, independent of the camera parameters.
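The key idea of "moving the metric reasoning outside the learning process" can be illustrated with a minimal sketch (the function name, shapes, and numbers below are illustrative assumptions, not the paper's actual interface): the camera intrinsics enter only in a separate geometric projection step that relates the LiDAR map to the image plane, so no learned weights depend on them.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D map points (N, 3) into the image with a pinhole model.
    The intrinsics K appear only in this geometric step, outside any
    learned component, so network weights stay camera-agnostic."""
    cam = points_3d @ R.T + t        # map/world frame -> camera frame
    uv = cam @ K.T                   # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective division

# Illustrative intrinsics: 500 px focal length, principal point (320, 240).
K = np.array([[500.,   0., 320.],
              [  0., 500., 240.],
              [  0.,   0.,   1.]])
R, t = np.eye(3), np.zeros(3)        # identity pose for the demo

pts = np.array([[0., 0., 10.],       # on the optical axis
                [1., 0., 10.]])      # 1 m to the right, 10 m ahead
print(project_points(pts, K, R, t))  # -> [[320. 240.] [370. 240.]]
```

A matching network only needs to predict pixel-level correspondences between the image and such a projection; recovering the metric pose from those correspondences (e.g. with a PnP solver) is then a purely geometric problem parameterized by K.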
