Paper Title
Neural Radiance Flow for 4D View Synthesis and Video Processing
Paper Authors
Paper Abstract
We present a method, Neural Radiance Flow (NeRFlow), to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images. Key to our approach is the use of a neural implicit representation that learns to capture the 3D occupancy, radiance, and dynamics of the scene. By enforcing consistency across different modalities, our representation enables multi-view rendering in diverse dynamic scenes, including water pouring, robotic interaction, and real images, outperforming state-of-the-art methods for spatial-temporal view synthesis. Our approach works even when input images are captured with only one camera. We further demonstrate that the learned representation can serve as an implicit scene prior, enabling video processing tasks such as image super-resolution and de-noising without any additional supervision.
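
To make the abstract's core idea concrete, here is a minimal PyTorch sketch of a 4D neural implicit representation with a cross-modal consistency term. The class names (RadianceField, FlowField), the plain MLP architectures, and the specific loss below are illustrative assumptions for exposition only, not the paper's actual implementation.

    # Hypothetical sketch of a NeRFlow-style 4D implicit representation.
    # Architectures and the consistency loss are assumptions, not the paper's code.
    import torch
    import torch.nn as nn

    class RadianceField(nn.Module):
        """Maps a 4D point (x, y, z, t) to density (occupancy) and RGB radiance."""
        def __init__(self, hidden=256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(4, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 4),  # 1 density channel + 3 color channels
            )

        def forward(self, xyzt):
            out = self.mlp(xyzt)
            sigma = torch.relu(out[..., :1])    # non-negative density
            rgb = torch.sigmoid(out[..., 1:])   # colors constrained to [0, 1]
            return sigma, rgb

    class FlowField(nn.Module):
        """Maps a 4D point (x, y, z, t) to a 3D scene-flow (velocity) vector."""
        def __init__(self, hidden=256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(4, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),
            )

        def forward(self, xyzt):
            return self.mlp(xyzt)

    # One way to couple the modalities: density and radiance at a point should
    # agree with density and radiance at the point advected by the flow.
    radiance = RadianceField()
    flow = FlowField()
    pts = torch.rand(1024, 4)                   # random (x, y, z, t) samples
    dt = 0.01                                   # small time step (assumed)
    vel = flow(pts)
    warped = torch.cat([pts[:, :3] + vel * dt,  # advect spatial coordinates
                        pts[:, 3:] + dt],       # advance time
                       dim=-1)
    sigma_a, rgb_a = radiance(pts)
    sigma_b, rgb_b = radiance(warped)
    consistency_loss = ((rgb_a - rgb_b) ** 2).mean() + \
                       ((sigma_a - sigma_b) ** 2).mean()

In a full system this consistency term would be combined with a photometric rendering loss against the input RGB images; the sketch only shows how a flow field can tie the occupancy and radiance of the scene together across time.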