Paper Title

Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video

Paper Authors

Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Christoph Lassner, Christian Theobalt

Paper Abstract

We present Non-Rigid Neural Radiance Fields (NR-NeRF), a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes. Our approach takes RGB images of a dynamic scene as input (e.g., from a monocular video recording), and creates a high-quality space-time geometry and appearance representation. We show that a single handheld consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views, e.g., a "bullet-time" video effect. NR-NeRF disentangles the dynamic scene into a canonical volume and its deformation. Scene deformation is implemented as ray bending, where straight rays are deformed non-rigidly. We also propose a novel rigidity network to better constrain rigid regions of the scene, leading to more stable results. The ray bending and rigidity network are trained without explicit supervision. Our formulation enables dense correspondence estimation across views and time, and compelling video editing applications such as motion exaggeration. Our code will be open sourced.
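The abstract describes a two-part decomposition: a per-frame ray-bending field that warps sample points on straight camera rays into a shared canonical volume, and a rigidity network that suppresses those offsets in rigid parts of the scene. The sketch below illustrates that idea only; it is not the authors' implementation. The module names, layer sizes, per-frame latent codes, and the way the rigidity value gates the offset are assumptions for illustration.

```python
# A minimal PyTorch sketch of the ray-bending idea described above, NOT the
# authors' code: latent-code size, MLP widths, and the rigidity gating are
# assumptions made purely for illustration.
import torch
import torch.nn as nn

class RayBending(nn.Module):
    """Predicts a non-rigid 3D offset for each sample point along a ray."""
    def __init__(self, latent_dim=32, hidden=128):
        super().__init__()
        self.offset_mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),               # per-point 3D offset
        )
        self.rigidity_mlp = nn.Sequential(       # rigidity depends on position only
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # assumed: 0 = rigid, 1 = non-rigid
        )

    def forward(self, pts, frame_code):
        # pts: (N, 3) sample points in world space; frame_code: (N, latent_dim)
        offset = self.offset_mlp(torch.cat([pts, frame_code], dim=-1))
        rigidity = self.rigidity_mlp(pts)
        return pts + rigidity * offset           # bent points in the canonical volume

class CanonicalNeRF(nn.Module):
    """Time-independent radiance field queried at the bent (canonical) points."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                # RGB + density
        )

    def forward(self, pts_canonical):
        out = self.mlp(pts_canonical)
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3:])
        return rgb, sigma

# Usage: bend the straight-ray samples of one frame, then query the canonical volume.
bend, nerf = RayBending(), CanonicalNeRF()
pts = torch.rand(1024, 3)                        # samples along camera rays
code = torch.zeros(1024, 32)                     # learned latent code of this frame
rgb, sigma = nerf(bend(pts, code))
```

In this reading, rigid regions keep near-zero offsets and map to the canonical volume almost unchanged, while non-rigid regions are free to deform per frame; both networks are trained only from the photometric reconstruction loss, without explicit deformation supervision.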
