Paper Title
Learning Implicit Surface Light Fields
Paper Authors
Paper Abstract
Implicit representations of 3D objects have recently achieved impressive results on learning-based 3D reconstruction tasks. While existing works use simple texture models to represent object appearance, photo-realistic image synthesis requires reasoning about the complex interplay of light, geometry, and surface properties. In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field. In contrast to existing representations, our implicit model represents surface light fields continuously and independently of the geometry. Moreover, we condition the surface light field on the location and color of a small light source. Compared to traditional surface light field models, this allows us to manipulate the light source and relight the object using environment maps. We further demonstrate the capability of our model to predict the visual appearance of an unseen object from a single real RGB image and corresponding 3D shape information. As evidenced by our experiments, our model is able to infer rich visual appearance including shadows and specular reflections. Finally, we show that the proposed representation can be embedded into a variational auto-encoder to generate novel appearances that conform to specified illumination conditions.
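To make the core idea concrete: the conditional surface light field described above can be viewed as a learned function that maps a 3D surface point, a viewing direction, and the position and color of a light source to an outgoing RGB value. The following is a minimal sketch of that interface using a tiny randomly initialized MLP in NumPy; the network size, layer structure, and function names are illustrative assumptions, not the architecture from the paper.

```python
import numpy as np

# Hypothetical sketch of a conditional surface light field:
# an MLP f(p, d, l_pos, l_col) -> RGB that maps a surface point p,
# a viewing direction d, and a point light (position, color) to an
# outgoing radiance value. Sizes and weights are illustrative only.

rng = np.random.default_rng(0)

IN_DIM = 3 + 3 + 3 + 3   # point, view direction, light position, light color
HIDDEN = 64

W1 = rng.standard_normal((IN_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, 3)) * 0.1
b2 = np.zeros(3)

def surface_light_field(p, d, l_pos, l_col):
    """Evaluate the implicit surface light field at one surface point.

    All arguments are length-3 numpy arrays; the output is an RGB
    value in [0, 1] (via a sigmoid on the final layer).
    """
    x = np.concatenate([p, d, l_pos, l_col])
    h = np.maximum(W1.T @ x + b1, 0.0)             # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2.T @ h + b2)))  # sigmoid -> RGB

# Query the field at one surface point under a white point light.
rgb = surface_light_field(
    p=np.array([0.1, 0.2, 0.3]),
    d=np.array([0.0, 0.0, 1.0]),
    l_pos=np.array([1.0, 1.0, 1.0]),
    l_col=np.array([1.0, 1.0, 1.0]),
)
print(rgb.shape)  # (3,)
```

Because the light's position and color are explicit inputs, relighting reduces to re-querying the same function with new light parameters, and an environment map can be approximated by summing the contributions of many such small light sources.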