Paper title
Crowdsourced 3D Mapping: A Combined Multi-View Geometry and Self-Supervised Learning Approach
Paper authors
Paper abstract
The ability to efficiently utilize crowdsourced visual data carries immense potential for the domains of large-scale dynamic mapping and autonomous driving. However, state-of-the-art methods for crowdsourced 3D mapping assume prior knowledge of the camera intrinsics. In this work, we propose a framework that estimates the 3D positions of semantically meaningful landmarks, such as traffic signs, without assuming known camera intrinsics, using only a monocular color camera and GPS. We utilize multi-view geometry as well as deep-learning-based self-calibration, depth, and ego-motion estimation for traffic sign positioning, and show that combining their strengths is important for increasing map coverage. To facilitate research on this task, we construct and make available a KITTI-based 3D traffic sign ground-truth positioning dataset. Using our proposed framework, we achieve an average single-journey relative and absolute positioning accuracy of 39 cm and 1.26 m, respectively, on this dataset.
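The multi-view geometry component described in the abstract rests on triangulating a landmark's 3D position from its pixel observations in two or more posed views. As a minimal illustrative sketch (not the paper's implementation, and assuming calibrated projection matrices are already available), the standard linear DLT triangulation for two views looks like this:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark seen in two views.

    P1, P2: 3x4 camera projection matrices (intrinsics @ [R|t]).
    x1, x2: (u, v) pixel observations of the landmark in each view.
    Returns the estimated 3D point in the world frame.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: u * (P[2] @ X) = P[0] @ X, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector of A
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point with a 3x4 projection matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical example: intrinsics and a 1 m horizontal baseline.
K = np.array([[700., 0., 320.],
              [0., 700., 240.],
              [0.,   0.,   1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])

X_true = np.array([2.0, 1.0, 10.0])  # a landmark 10 m ahead
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

In the paper's setting the intrinsics inside `P1`/`P2` are not given in advance; that is exactly the gap the learned self-calibration fills before geometric triangulation can be applied.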