Paper Title
SinGRAV: Learning a Generative Radiance Volume from a Single Natural Scene
Paper Authors
Paper Abstract
We present a 3D generative model for general natural scenes. Lacking the volumes of 3D data necessary to characterize the target scene, we propose to learn from a single scene. Our key insight is that a natural scene often contains multiple constituents whose geometry, texture, and spatial arrangement follow clear patterns, yet still exhibit rich variations across different regions of the same scene. This suggests localizing the learning of a generative model on substantial local regions. Hence, we exploit a multi-scale convolutional network, which naturally possesses a spatial locality bias, to learn the statistics of local regions at multiple scales within a single scene. In contrast to existing methods, our learning setup bypasses the need to collect data from many homogeneous 3D scenes in order to learn common features. We coin our method SinGRAV, for learning a Generative RAdiance Volume from a Single natural scene. We demonstrate SinGRAV's ability to generate plausible and diverse variations from a single scene, its merits over state-of-the-art generative neural scene methods, and its versatility across a variety of applications, spanning 3D scene editing, composition, and animation. Code and data will be released to facilitate further research.
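To make the core idea concrete, the following is a minimal sketch, assuming PyTorch, of what one scale of a spatially local convolutional generator over a radiance volume could look like. The channel counts, kernel size, residual refinement scheme, and the four-channel (RGB plus density) volume layout are illustrative assumptions chosen for exposition, not SinGRAV's actual architecture.

```python
# Minimal sketch (assumption: PyTorch) of one scale of a spatially local
# convolutional generator over a radiance volume. Channel counts, kernel
# size, and the RGB + density output layout are illustrative guesses,
# not SinGRAV's published architecture.
import torch
import torch.nn as nn

class VolumeGeneratorScale(nn.Module):
    """One scale: refines a coarser volume given injected 3D noise."""
    def __init__(self, in_ch=4, hidden=32, out_ch=4, depth=4):
        super().__init__()
        layers = []
        ch = in_ch
        for _ in range(depth):
            # Small 3D kernels keep the receptive field local, so the
            # network models statistics of local regions rather than
            # memorizing the global layout of the whole scene.
            layers += [nn.Conv3d(ch, hidden, kernel_size=3, padding=1),
                       nn.BatchNorm3d(hidden),
                       nn.LeakyReLU(0.2)]
            ch = hidden
        layers += [nn.Conv3d(ch, out_ch, kernel_size=3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, prev_volume, noise):
        # Residual refinement of the coarser scale's output.
        return prev_volume + self.net(prev_volume + noise)

# Usage: a coarse 4-channel (e.g., RGB + density) volume at 16^3 voxels.
gen = VolumeGeneratorScale()
coarse = torch.zeros(1, 4, 16, 16, 16)
noise = torch.randn_like(coarse)
fine = gen(coarse, noise)   # same shape as the input volume
print(fine.shape)           # torch.Size([1, 4, 16, 16, 16])
```

In a full multi-scale pipeline of this kind, each scale's output would be upsampled (e.g., trilinearly) and fed, together with fresh noise, into the next finer scale, so that every scale only needs to capture local statistics at its own resolution.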