Paper Title
3D Photography using Context-aware Layered Depth Inpainting
Paper Authors
Paper Abstract
We propose a method for converting a single RGB-D input image into a 3D photo: a multi-layer representation for novel view synthesis that contains hallucinated color and depth structures in regions occluded in the original view. We use a Layered Depth Image with explicit pixel connectivity as the underlying representation, and present a learning-based inpainting model that synthesizes new local color-and-depth content into the occluded regions in a spatially context-aware manner. The resulting 3D photos can be efficiently rendered with motion parallax using standard graphics engines. We validate the effectiveness of our method on a wide range of challenging everyday scenes and show fewer artifacts than the state of the art.
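To make the abstract's notion of a Layered Depth Image (LDI) with explicit pixel connectivity concrete, here is a minimal sketch of such a data structure. This is an illustrative assumption, not the authors' actual implementation: the class and field names (`LDIPixel`, `color`, `depth`, neighbor links) are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of one LDI pixel. Unlike a plain depth map, an LDI can
# store several pixels per image location (one per depth layer), and each
# pixel keeps explicit links to its 4-connected neighbors. A link is None
# across a depth discontinuity, which is where occluded content must be
# hallucinated by the inpainting model.
@dataclass
class LDIPixel:
    color: tuple              # (R, G, B)
    depth: float              # depth of this layer at this location
    left: Optional["LDIPixel"] = None
    right: Optional["LDIPixel"] = None
    up: Optional["LDIPixel"] = None
    down: Optional["LDIPixel"] = None

# Two adjacent pixels on the same smooth surface are explicitly connected:
a = LDIPixel(color=(120, 90, 60), depth=2.00)
b = LDIPixel(color=(118, 92, 61), depth=2.05)
a.right, b.left = b, a
print(a.right is b)  # True
```

The explicit connectivity is what lets the method treat depth edges as boundaries between foreground and background layers, inpaint new color-and-depth pixels behind them, and still render the whole structure as a connected mesh in a standard graphics engine.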