Paper Title

DA4AD: End-to-End Deep Attention-based Visual Localization for Autonomous Driving

Authors

Yao Zhou, Guowei Wan, Shenhua Hou, Li Yu, Gang Wang, Xiaofei Rui, Shiyu Song

Abstract

We present a visual localization framework based on novel deep attention aware features for autonomous driving that achieves centimeter level localization accuracy. Conventional approaches to the visual localization problem rely on handcrafted features or human-made objects on the road. They are known to be either prone to unstable matching caused by severe appearance or lighting changes, or too scarce to deliver constant and robust localization results in challenging scenarios. In this work, we seek to exploit the deep attention mechanism to search for salient, distinctive and stable features that are good for long-term matching in the scene through a novel end-to-end deep neural network. Furthermore, our learned feature descriptors are demonstrated to be competent to establish robust matches and therefore successfully estimate the optimal camera poses with high precision. We comprehensively validate the effectiveness of our method using a freshly collected dataset with high-quality ground truth trajectories and hardware synchronization between sensors. Results demonstrate that our method achieves a competitive localization accuracy when compared to the LiDAR-based localization solutions under various challenging circumstances, leading to a potential low-cost localization solution for autonomous driving.
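
The abstract outlines a pipeline of attention-based keypoint selection, learned-descriptor matching, and camera pose estimation against a pre-built map. Below is a minimal illustrative sketch of that general pipeline, not the paper's implementation: random toy data stands in for the network's attention and descriptor outputs, the helper names, camera intrinsics, and landmark depths are all hypothetical, and OpenCV's off-the-shelf PnP-RANSAC solver substitutes for the paper's pose optimization.

```python
# Minimal sketch of an attention-guided visual localization pipeline.
# NOT the DA4AD implementation: toy random data replaces network outputs,
# and OpenCV's PnP-RANSAC stands in for the paper's pose estimation.
import numpy as np
import cv2


def select_keypoints(attention, descriptors, k=256):
    """Keep the k pixel locations with the highest attention scores.

    attention:   (H, W) saliency map predicted by the network.
    descriptors: (H, W, D) dense descriptor map.
    Returns (k, 2) pixel coordinates (x, y) and their (k, D) descriptors.
    """
    idx = np.argpartition(attention.ravel(), -k)[-k:]  # top-k, unordered
    ys, xs = np.unravel_index(idx, attention.shape)
    coords = np.stack([xs, ys], axis=1).astype(np.float32)
    return coords, descriptors[ys, xs]


def match_descriptors(desc_query, desc_map, ratio=0.8):
    """Nearest-neighbor matching with Lowe's ratio test on L2 distance."""
    d = np.linalg.norm(desc_query[:, None] - desc_map[None], axis=2)
    nn = np.argsort(d, axis=1)[:, :2]          # two closest map features
    rows = np.arange(len(d))
    keep = d[rows, nn[:, 0]] < ratio * d[rows, nn[:, 1]]
    return np.flatnonzero(keep), nn[keep, 0]   # query indices, map indices


rng = np.random.default_rng(0)
H, W, D = 120, 160, 32
K = np.array([[500.0, 0, W / 2], [0, 500.0, H / 2], [0, 0, 1]])

# Pretend network outputs for the map image; in the real system these and
# the 3D landmarks would come from the pre-built localization map.
attn = rng.random((H, W))
desc = rng.standard_normal((H, W, D)).astype(np.float32)
kpts_map, feats_map = select_keypoints(attn, desc)

# Hypothetical 3D landmarks: back-project each map keypoint at depth 10 m.
pts_h = np.concatenate([kpts_map, np.ones((len(kpts_map), 1), np.float32)], axis=1)
pts3d = (pts_h @ np.linalg.inv(K).T) * 10.0

# Synthesize the query view: project landmarks through a ground-truth pose,
# and perturb the map descriptors to mimic re-observed query descriptors.
rvec_gt = np.array([0.05, -0.02, 0.01])
tvec_gt = np.array([0.30, -0.10, 0.50])
proj, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, None)
kpts_query = proj.reshape(-1, 2)
feats_query = feats_map + 0.05 * rng.standard_normal(feats_map.shape).astype(np.float32)

# Match query features to map features, then recover the camera pose.
q_idx, m_idx = match_descriptors(feats_query, feats_map)
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d[m_idx], kpts_query[q_idx], K, None)
print(ok, rvec.ravel(), tvec.ravel())  # should approximate rvec_gt / tvec_gt
```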
