Paper Title
D-InLoc++: Indoor Localization in Dynamic Environments
Paper Authors
Paper Abstract
Most state-of-the-art localization algorithms rely on robust relative pose estimation and geometric verification to obtain moving-object-agnostic camera poses in complex indoor environments. However, this approach is prone to mistakes if a scene contains repetitive structures, e.g., desks, tables, boxes, or moving people. We show that movable objects introduce non-negligible localization error and present a new, straightforward method to predict the six-degree-of-freedom (6DoF) pose more robustly. We equip the localization pipeline InLoc with the real-time instance segmentation network YOLACT++. The masks of dynamic objects are employed in the relative pose estimation step and in the final sorting of camera pose proposals. First, we filter out the matches lying on the masks of dynamic objects. Second, we skip the comparison of query and synthetic images over the areas related to moving objects. This procedure leads to more robust localization. Lastly, we describe and mitigate the errors caused by the gradient-based comparison between synthetic and query images, and we publish a new pipeline for simulating environments with movable objects from Matterport scans. All code is available at github.com/dubenma/D-InLocpp .
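The two masking steps mentioned in the abstract can be illustrated with a short sketch. The snippet below is not the authors' released code; the function names, the plain-NumPy formulation, and the boolean-mask representation of the YOLACT++ output are assumptions made for clarity.

import numpy as np

def filter_matches_by_mask(query_kpts, db_kpts, dynamic_mask):
    # Drop tentative matches whose query keypoint falls on a dynamic-object mask.
    # query_kpts, db_kpts: (N, 2) arrays of (x, y) pixel coordinates in the query/database image.
    # dynamic_mask: (H, W) boolean array, True on movable objects (e.g., a YOLACT++ instance mask).
    xs = query_kpts[:, 0].round().astype(int)
    ys = query_kpts[:, 1].round().astype(int)
    keep = ~dynamic_mask[ys, xs]  # keep only matches that lie on static scene parts
    return query_kpts[keep], db_kpts[keep]

def masked_photometric_error(query_img, synth_img, dynamic_mask):
    # Compare the query and the rendered (synthetic) view while skipping masked pixels,
    # so that moving objects do not penalize an otherwise correct camera-pose proposal.
    valid = ~dynamic_mask
    diff = np.abs(query_img.astype(np.float32) - synth_img.astype(np.float32))
    return diff[valid].mean() if valid.any() else np.inf

In the pipeline sketched here, the first function would be applied before relative pose estimation and the second during the final re-ranking of camera pose proposals, mirroring the two uses of the dynamic-object masks described above.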