Paper Title
Structure PLP-SLAM: Efficient Sparse Mapping and Localization using Point, Line and Plane for Monocular, RGB-D and Stereo Cameras
Paper Authors
Paper Abstract
This paper presents a visual SLAM system that uses both points and lines for robust camera localization, and simultaneously performs a piece-wise planar reconstruction (PPR) of the environment to provide a structural map in real-time. One of the biggest challenges in parallel tracking and mapping with a monocular camera is to keep the scale consistent when reconstructing the geometric primitives. This further introduces difficulties in graph optimization of the bundle adjustment (BA) step. We solve these problems by proposing several run-time optimizations on the reconstructed lines and planes. Our system is able to run with depth and stereo sensors in addition to the monocular setting. Our proposed SLAM tightly incorporates the semantic and geometric features to boost both frontend pose tracking and backend map optimization. We evaluate our system exhaustively on various datasets, and show that we outperform state-of-the-art methods in terms of trajectory precision. The code of PLP-SLAM has been made available in open-source for the research community (https://github.com/PeterFWS/Structure-PLP-SLAM).
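For orientation, the bundle adjustment (BA) step referred to above can be viewed as a joint non-linear least-squares problem over camera poses and point, line, and plane landmarks. The expression below is a generic, illustrative formulation assuming standard reprojection, point-to-line, and point-to-plane residuals; the actual cost terms, landmark parameterizations, and robust kernels used in PLP-SLAM may differ and are detailed in the paper and the released code.

E(\{T_i\},\{X_j\},\{L_k\},\{\pi_m\}) \;=\; \sum_{(i,j)} \rho\big(\lVert u_{ij} - \Pi(T_i X_j) \rVert^2\big) \;+\; \sum_{(i,k)} \rho\big(d_{2D}(\ell_{ik}, \Pi(T_i L_k))^2\big) \;+\; \sum_{(j,m)} \rho\big((n_m^\top X_j + d_m)^2\big)

Here T_i are camera poses, X_j 3D points, L_k 3D line landmarks (e.g., endpoint or Plücker parameterizations), \pi_m = (n_m, d_m) plane parameters, u_{ij} and \ell_{ik} the observed keypoints and line segments, \Pi the camera projection, d_{2D} a 2D point-to-line distance, and \rho a robust kernel; BA jointly refines all variables by minimizing E.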