Paper Title
DronePose: Photorealistic UAV-Assistant Dataset Synthesis for 3D Pose Estimation via a Smooth Silhouette Loss
Paper Authors
Paper Abstract
In this work, we consider UAVs as cooperative agents supporting human users in their operations. In this context, the 3D localisation of the UAV assistant is an important task that can facilitate the exchange of spatial information between the user and the UAV. To address this in a data-driven manner, we design a data synthesis pipeline to create a realistic multimodal dataset that includes both the exocentric user view and the egocentric UAV view. We then exploit the joint availability of photorealistic and synthesized inputs to train a single-shot monocular pose estimation model. During training we leverage differentiable rendering to supplement a state-of-the-art direct regression objective with a novel smooth silhouette loss. Our results demonstrate its qualitative and quantitative performance gains over traditional silhouette objectives. Our data and code are available at https://vcl3d.github.io/DronePose.
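To make the training objective described in the abstract more concrete, the following is a minimal PyTorch sketch of how a direct pose regression loss could be supplemented with a smoothed silhouette term computed on a differentiably rendered mask. It assumes the smoothing is realized as a Gaussian blur of both silhouettes before an L1 comparison and that the rendered silhouette is produced upstream by a differentiable renderer; the function names, the blur-based smoothing, and the weighting term w_sil are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gaussian_blur(mask, kernel_size=11, sigma=3.0):
    """Blur a (B, 1, H, W) silhouette in [0, 1] with a separable Gaussian,
    widening its support so misaligned regions still receive gradient."""
    coords = torch.arange(kernel_size, dtype=mask.dtype, device=mask.device) - kernel_size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).view(1, 1, 1, -1)
    blurred = F.conv2d(mask, g, padding=(0, kernel_size // 2))                      # horizontal pass
    blurred = F.conv2d(blurred, g.transpose(2, 3), padding=(kernel_size // 2, 0))   # vertical pass
    return blurred

def smooth_silhouette_loss(rendered_sil, target_sil, kernel_size=11, sigma=3.0):
    """Compare a differentiably rendered silhouette to the ground-truth mask
    after smoothing both (illustrative stand-in for the paper's smooth loss)."""
    return F.l1_loss(gaussian_blur(rendered_sil, kernel_size, sigma),
                     gaussian_blur(target_sil, kernel_size, sigma))

def total_loss(pred_pose, gt_pose, rendered_sil, gt_sil, w_sil=0.1):
    """Direct pose regression objective supplemented by the silhouette term;
    w_sil is a hypothetical balancing weight."""
    regression = F.smooth_l1_loss(pred_pose, gt_pose)
    return regression + w_sil * smooth_silhouette_loss(rendered_sil, gt_sil)
```

The design intuition is that a hard binary silhouette comparison gives near-zero gradient where the predicted and target masks do not overlap, whereas smoothing both masks spreads the signal over a wider neighborhood, which is what a smooth silhouette objective aims to exploit.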