Paper Title
Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes
Paper Authors
Paper Abstract
We present a method to perform novel view and time synthesis of dynamic scenes, requiring only a monocular video with known camera poses as input. To do this, we introduce Neural Scene Flow Fields, a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion. Our representation is optimized through a neural network to fit the observed input views. We show that our representation can be used for complex dynamic scenes, including thin structures, view-dependent effects, and natural degrees of motion. We conduct a number of experiments that demonstrate our approach significantly outperforms recent monocular view synthesis methods, and show qualitative results of space-time view synthesis on a variety of real-world videos.
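The abstract's core idea is that a single network represents the dynamic scene as a time-variant continuous function mapping a space-time point (and viewing direction) to appearance, geometry, and 3D scene motion. Below is a minimal PyTorch sketch of what such a representation could look like. The class name, layer widths, and output heads are hypothetical illustrations of the described inputs and outputs, not the authors' actual architecture, which relies on further components (e.g., positional encodings and regularization losses) omitted here.

```python
import torch
import torch.nn as nn


class NeuralSceneFlowField(nn.Module):
    """Hypothetical sketch: maps a space-time point and view direction to
    appearance (RGB), geometry (volume density), and 3D scene flow."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        # Shared trunk over the space-time input (x, y, z, t).
        self.trunk = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Geometry: volume density sigma at the queried point.
        self.density_head = nn.Linear(hidden, 1)
        # Appearance: view-dependent color, conditioned on the direction.
        self.color_head = nn.Sequential(
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )
        # 3D scene motion: flow vectors to the previous and next time step.
        self.flow_head = nn.Linear(hidden, 6)

    def forward(self, xyzt: torch.Tensor, view_dir: torch.Tensor):
        h = self.trunk(xyzt)                                      # (N, hidden)
        sigma = torch.relu(self.density_head(h))                  # (N, 1)
        rgb = self.color_head(torch.cat([h, view_dir], dim=-1))   # (N, 3)
        flow_fwd, flow_bwd = self.flow_head(h).chunk(2, dim=-1)   # (N, 3) each
        return rgb, sigma, flow_fwd, flow_bwd


# Example query: 1024 sampled space-time points with view directions.
model = NeuralSceneFlowField()
rgb, sigma, flow_fwd, flow_bwd = model(torch.rand(1024, 4), torch.rand(1024, 3))
```

In a NeRF-style pipeline, the predicted color and density would be volume-rendered along camera rays, while the scene-flow outputs link a point at time t to its positions at adjacent frames so that neighboring observations can supervise one another.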