Paper Title

X-NeRF: Explicit Neural Radiance Field for Multi-Scene 360$^{\circ}$ Insufficient RGB-D Views

Paper Authors

Haoyi Zhu, Hao-Shu Fang, Cewu Lu

Paper Abstract

Neural Radiance Fields (NeRFs), despite their outstanding performance on novel view synthesis, often need dense input views. Many works train one model for each scene separately, and few of them explore incorporating multi-modal data into this problem. In this paper, we focus on a rarely discussed but important setting: can we train one model that represents multiple scenes, given 360$^\circ$ insufficient views and RGB-D images? By insufficient views we mean a few extremely sparse and almost non-overlapping views. To deal with this, we propose X-NeRF, a fully explicit approach that learns a general scene completion process instead of a coordinate-based mapping. Given a few insufficient RGB-D input views, X-NeRF first transforms them into a sparse point cloud tensor and then applies a 3D sparse generative Convolutional Neural Network (CNN) to complete it into an explicit radiance field, whose volumetric rendering can be performed quickly at inference time without running any network. To avoid overfitting, in addition to the common rendering loss, we apply a perceptual loss as well as view augmentation through random rotations of the point clouds. The proposed method significantly outperforms previous implicit methods in our setting, indicating the great potential of the proposed problem and approach. Code and data are available at https://github.com/HaoyiZhu/XNeRF.
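As a rough illustration of what "rendering an explicit radiance field without running a network" can look like, below is a minimal sketch that alpha-composites colors and densities sampled from a dense voxel grid along camera rays. This is not the authors' implementation; the function name, grid layout, and the nearest-neighbor lookup are illustrative assumptions.

```python
# Minimal sketch: volumetric rendering from an explicit radiance-field grid.
# Hypothetical shapes and a nearest-neighbor lookup; not the official X-NeRF code.
import numpy as np

def render_rays(rgb_grid, sigma_grid, origins, dirs,
                near=0.1, far=4.0, n_samples=64, voxel_size=0.05):
    """rgb_grid: (X, Y, Z, 3) colors in [0, 1]; sigma_grid: (X, Y, Z) densities.
    origins, dirs: (R, 3) ray origins and unit directions in grid coordinates."""
    t = np.linspace(near, far, n_samples)                              # (S,)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]    # (R, S, 3)

    # Nearest-neighbor lookup into the explicit grid (grid origin assumed at 0).
    idx = np.clip((pts / voxel_size).astype(int), 0,
                  np.array(sigma_grid.shape) - 1)                      # (R, S, 3)
    sigma = sigma_grid[idx[..., 0], idx[..., 1], idx[..., 2]]          # (R, S)
    rgb = rgb_grid[idx[..., 0], idx[..., 1], idx[..., 2]]              # (R, S, 3)

    # Standard NeRF-style alpha compositing: no network is evaluated here.
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))                 # (S,)
    alpha = 1.0 - np.exp(-sigma * delta[None, :])                      # (R, S)
    trans = np.cumprod(1.0 - alpha + 1e-10, axis=1)
    trans = np.concatenate([np.ones_like(trans[:, :1]), trans[:, :-1]], axis=1)
    weights = alpha * trans                                            # (R, S)
    return (weights[..., None] * rgb).sum(axis=1)                      # (R, 3)
```

The point of the sketch is that once a completion network has produced the explicit grid, rendering reduces to array indexing plus the standard compositing weights: with a 128$^3$ grid and ray arrays of shape (1024, 3), the call returns a (1024, 3) array of rendered pixel colors.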
