Paper Title

SketchSampler: Sketch-based 3D Reconstruction via View-dependent Depth Sampling

Authors

Chenjian Gao, Qian Yu, Lu Sheng, Yi-Zhe Song, Dong Xu

Abstract

Reconstructing a 3D shape from a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape. Existing works try to employ a global feature extracted from the sketch to directly predict the 3D coordinates, but they usually lose fine details, producing results that are not faithful to the input sketch. Through analyzing the 3D-to-2D projection process, we notice that the density map characterizing the distribution of 2D point clouds (i.e., the probability of a point being projected at each location of the projection plane) can be used as a proxy to facilitate the reconstruction process. To this end, we first translate a sketch via an image translation network into a more informative 2D representation that can be used to generate a density map. Next, a 3D point cloud is reconstructed via a two-stage probabilistic sampling process: first recovering the 2D points (i.e., the x and y coordinates) by sampling the density map, and then predicting the depth (i.e., the z coordinate) by sampling the depth values along the ray determined by each 2D point. Extensive experiments are conducted, and both quantitative and qualitative results show that our proposed approach significantly outperforms other baseline methods.
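The two-stage sampling described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the density map and per-pixel depth distribution below are random stand-ins for the network outputs, and the Gaussian depth model is an assumption for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the network outputs (assumed shapes, not the paper's).
H = W = 32          # resolution of the projection plane
N = 1024            # number of points to sample

# A density map over the projection plane (here: random, normalized to sum to 1).
density = rng.random((H, W))
density /= density.sum()

# Stage 1: recover 2D points (x, y) by sampling pixel locations
# with probability proportional to the density map.
flat_idx = rng.choice(H * W, size=N, p=density.ravel())
ys, xs = np.unravel_index(flat_idx, (H, W))

# Stage 2: predict depth z along the ray through each sampled 2D point.
# Here we draw from a per-pixel Gaussian as a stand-in for the learned
# depth distribution conditioned on the 2D location.
depth_mean = rng.random((H, W))          # hypothetical predicted mean depth
depth_std = 0.05
zs = rng.normal(depth_mean[ys, xs], depth_std)

# Assemble the (N, 3) reconstructed point cloud.
points = np.stack([xs, ys, zs], axis=1)
print(points.shape)
```

Sampling (x, y) first concentrates points where the projected silhouette is dense, so the depth network only has to model a 1D distribution per ray rather than the full 3D volume.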
