Paper Title

Self-Aligning Depth-regularized Radiance Fields for Asynchronous RGB-D Sequences

Paper Authors

Huang, Yuxin, Yang, Andong, Wu, Zirui, Chen, Yuantao, Yang, Runyi, Zhu, Zhenxin, Hou, Chao, Zhao, Hao, Zhou, Guyue

Abstract

It has been shown that learning radiance fields with depth rendering and depth supervision can effectively promote the quality and convergence of view synthesis. However, this paradigm requires input RGB-D sequences to be synchronized, hindering its usage in the UAV city modeling scenario. As there exists asynchrony between RGB images and depth images due to high-speed flight, we propose a novel time-pose function, which is an implicit network that maps timestamps to $\rm SE(3)$ elements. To simplify the training process, we also design a joint optimization scheme to jointly learn the large-scale depth-regularized radiance fields and the time-pose function. Our algorithm consists of three steps: (1) time-pose function fitting, (2) radiance field bootstrapping, and (3) joint pose error compensation and radiance field refinement. In addition, we propose a large synthetic dataset with diverse controlled mismatches and ground truth to evaluate this new problem setting systematically. Through extensive experiments, we demonstrate that our method outperforms baselines without regularization. We also show qualitatively improved results on a real-world asynchronous RGB-D sequence captured by a drone. Code, data, and models will be made publicly available.
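The abstract does not specify how the time-pose network parameterizes its $\rm SE(3)$ output. A minimal sketch, assuming one common choice (a 6-DoF axis-angle + translation vector mapped to a 4x4 pose matrix via the Rodrigues formula), illustrates how a per-timestamp network output could be turned into an $\rm SE(3)$ element; all function names and the tiny MLP here are hypothetical, not the paper's implementation:

```python
import numpy as np

def se3_from_6dof(xi):
    """Map a 6-vector (axis-angle rotation, translation) to a 4x4 SE(3) matrix.

    Hypothetical parameterization: the paper does not state how its implicit
    network's output is converted to an SE(3) element; axis-angle + Rodrigues
    is simply one common option.
    """
    omega, t = xi[:3], xi[3:]
    theta = np.linalg.norm(omega)
    if theta < 1e-8:
        R = np.eye(3)  # near-zero rotation: identity avoids division by zero
    else:
        k = omega / theta  # unit rotation axis
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])  # skew-symmetric cross-product matrix
        # Rodrigues' rotation formula
        R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def time_pose(timestamp, weights):
    """Toy stand-in for the learned time-pose function: timestamp -> SE(3).

    A one-hidden-layer MLP regressing a 6-DoF pose from a scalar timestamp;
    purely illustrative of the mapping's signature, not the paper's network.
    """
    W1, b1, W2, b2 = weights
    h = np.tanh(W1 * timestamp + b1)   # hidden features of the scalar input
    return se3_from_6dof(W2 @ h + b2)  # continuous pose at this timestamp
```

Because the mapping is continuous in the timestamp, the pose of an RGB frame and of a depth frame captured at slightly different times can both be queried from the same function, which is what lets the method compensate for the asynchrony.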
