Paper Title

On Robust Cross-View Consistency in Self-Supervised Monocular Depth Estimation

Paper Authors

Haimei Zhao, Jing Zhang, Zhuo Chen, Bo Yuan, Dacheng Tao

Paper Abstract

Remarkable progress has been made in self-supervised monocular depth estimation (SS-MDE) by exploring cross-view consistency, e.g., photometric consistency and 3D point cloud consistency. However, these consistency measures are highly vulnerable to illumination variance, occlusions, texture-less regions, and moving objects, making them not robust enough to handle diverse scenes. To address this challenge, we study two kinds of robust cross-view consistency in this paper. First, the spatial offset field between adjacent frames is obtained by reconstructing the reference frame from its neighbors via deformable alignment, and is then used to align the temporal depth features via a Depth Feature Alignment (DFA) loss. Second, the 3D point clouds of each reference frame and its nearby frames are calculated and transformed into voxel space, where the point density in each voxel is computed and aligned via a Voxel Density Alignment (VDA) loss. In this way, we exploit the temporal coherence in both the depth feature space and the 3D voxel space for SS-MDE, shifting the "point-to-point" alignment paradigm to a "region-to-region" one. Compared with the photometric consistency loss and the rigid point cloud alignment loss, the proposed DFA and VDA losses are more robust owing to the strong representation power of deep features and the high tolerance of voxel density to the aforementioned challenges. Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques. Extensive ablation studies and analyses validate the effectiveness of the proposed losses, especially in challenging scenes. The code and models are available at https://github.com/sunnyHelen/RCVC-depth.
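The DFA idea lends itself to a short illustration. Below is a minimal PyTorch sketch of the feature-alignment step, not the authors' implementation: it assumes an offset field has already been predicted elsewhere (e.g., by a small alignment network, which we do not show), warps the neighbor's depth features onto the reference view by bilinear sampling, and penalizes the remaining discrepancy with an L1 term. All function names, tensor shapes, and the choice of L1 are our assumptions.

```python
# Hypothetical sketch of depth feature alignment (DFA); not the paper's code.
import torch
import torch.nn.functional as F

def warp_with_offsets(feat, offsets):
    """Bilinearly sample `feat` (B, C, H, W) at locations displaced by
    `offsets` (B, 2, H, W), where offsets are given in pixels."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=feat.device),
                            torch.arange(w, device=feat.device),
                            indexing="ij")
    # Shift the sampling grid by the predicted offsets, then normalize
    # coordinates to the [-1, 1] range expected by grid_sample.
    grid_x = (xs[None] + offsets[:, 0]) / (w - 1) * 2 - 1
    grid_y = (ys[None] + offsets[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (B, H, W, 2), (x, y) order
    return F.grid_sample(feat, grid, align_corners=True)

def dfa_loss(ref_feat, src_feat, offsets):
    """L1 distance between reference depth features and the neighbor's
    features aligned onto the reference view via the offset field."""
    return torch.abs(ref_feat - warp_with_offsets(src_feat, offsets)).mean()
```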
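Similarly, the VDA loss can be sketched as follows, again as an assumption-laden illustration rather than the paper's implementation: two point clouds expressed in the same reference coordinate frame (the nearby frame's cloud is assumed to be pre-transformed by the estimated ego-motion) are binned into a voxel grid, and the normalized per-voxel point densities are compared. The `grid_range` and `voxel_size` parameters are hypothetical.

```python
# Hypothetical sketch of voxel density alignment (VDA); not the paper's code.
import torch

def voxel_density(points, grid_range=(-40.0, 40.0), voxel_size=2.0):
    """Count how many points of an (N, 3) cloud fall into each voxel.

    Returns a flattened density histogram normalized to sum to 1, so the
    comparison is insensitive to the absolute number of points."""
    lo, hi = grid_range
    n_bins = int((hi - lo) / voxel_size)
    # Clamp points into the grid, then convert coordinates to voxel indices.
    idx = ((points.clamp(lo, hi - 1e-6) - lo) / voxel_size).long()  # (N, 3)
    flat = idx[:, 0] * n_bins * n_bins + idx[:, 1] * n_bins + idx[:, 2]
    hist = torch.bincount(flat, minlength=n_bins ** 3).float()
    return hist / hist.sum().clamp(min=1.0)

def vda_loss(ref_points, src_points_in_ref):
    """Align the per-voxel point densities of two clouds expressed in the
    same (reference) coordinate frame."""
    return torch.abs(voxel_density(ref_points) -
                     voxel_density(src_points_in_ref)).sum()
```

Note that hard binning via `bincount` is not differentiable with respect to the point coordinates; a trainable version would need a soft voxel assignment, which this sketch omits for brevity.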
