Paper Title
Probabilistic Volumetric Fusion for Dense Monocular SLAM
Paper Authors
Paper Abstract
We present a novel method to reconstruct 3D scenes from images by leveraging deep dense monocular SLAM and fast uncertainty propagation. The proposed approach reconstructs scenes densely, accurately, and in real time, while remaining robust to the extremely noisy depth estimates produced by dense monocular SLAM. Unlike previous approaches, which either use ad-hoc depth filters or estimate depth uncertainty from RGB-D cameras' sensor models, our probabilistic depth uncertainty derives directly from the information matrix of the underlying bundle adjustment problem in SLAM. We show that the resulting depth uncertainty provides an excellent signal for weighting the depth maps during volumetric fusion. Without our depth uncertainty, the resulting mesh is noisy and riddled with artifacts, whereas our approach produces an accurate 3D mesh with significantly fewer artifacts. We provide results on the challenging EuRoC dataset and show that our approach achieves 92% better accuracy than directly fusing depths from monocular SLAM, and improvements of up to 90% over the best competing approach.
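
The two ideas the abstract names, extracting per-pixel depth uncertainty from the bundle adjustment information matrix and using it to weight volumetric fusion, can be illustrated with a short sketch. The Python/NumPy code below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the information matrix is partitioned into pose and per-pixel depth blocks with a diagonal depth-depth block (as in dense BA, where each pixel's depth is an independent variable), approximates the marginal depth variance by inverting the diagonal of the Schur complement, and fuses depths into a TSDF with inverse-variance weights. All function and variable names are hypothetical.

    import numpy as np

    def marginal_depth_variance(H_pp, H_pd, H_dd_diag):
        # Marginal variance of each per-pixel depth from the BA information
        # matrix, via the Schur complement of the pose block:
        #   S = H_dd - H_dp H_pp^{-1} H_pd,  Sigma_dd = S^{-1}.
        # Approximation: invert only the diagonal of S (exact when S is diagonal).
        #   H_pp      : (P, P) pose-pose block
        #   H_pd      : (P, D) pose-depth cross block
        #   H_dd_diag : (D,)   diagonal of the depth-depth block
        X = np.linalg.solve(H_pp, H_pd)                      # H_pp^{-1} H_pd
        schur_diag = H_dd_diag - np.einsum('pd,pd->d', H_pd, X)
        return 1.0 / np.maximum(schur_diag, 1e-12)           # per-depth variance

    def fuse_depth(tsdf, weights, voxel_centers, depth, variance, K, T_wc,
                   trunc=0.08, max_std=0.10):
        # Inverse-variance weighted TSDF update for one depth map.
        #   tsdf, weights   : (N,) running TSDF values and fusion weights
        #   voxel_centers   : (N, 3) voxel centers in world coordinates
        #   depth, variance : (H, W) depth map and its per-pixel variance
        #   K, T_wc         : 3x3 intrinsics, 4x4 world-from-camera pose
        T_cw = np.linalg.inv(T_wc)
        pc = voxel_centers @ T_cw[:3, :3].T + T_cw[:3, 3]    # camera frame
        z = pc[:, 2]
        zs = np.where(z > 1e-6, z, 1.0)                      # guard divide-by-zero
        uv = (pc[:, :2] / zs[:, None]) @ K[:2, :2].T + K[:2, 2]
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)

        h, w = depth.shape
        ok = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ui, vi = u.clip(0, w - 1), v.clip(0, h - 1)
        d = np.where(ok, depth[vi, ui], 0.0)
        var = np.where(ok, variance[vi, ui], np.inf)

        sdf = d - z                                          # projective SDF
        # Gate out pixels that are unobserved, behind the truncation band,
        # or too uncertain according to the propagated BA uncertainty.
        ok &= (d > 0) & (sdf > -trunc) & (np.sqrt(var) < max_std)

        w_new = np.where(ok, 1.0 / np.maximum(var, 1e-12), 0.0)
        tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)
        denom = np.maximum(weights + w_new, 1e-12)
        tsdf[:] = np.where(ok, (weights * tsdf + w_new * tsdf_new) / denom, tsdf)
        weights[:] += w_new

Setting w_new to 1 everywhere and dropping the variance gate reduces this to standard uniform TSDF fusion, which corresponds to the noisy, artifact-prone baseline the abstract compares against.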