Paper Title

A Divide et Impera Approach for 3D Shape Reconstruction from Multiple Views

Authors

Riccardo Spezialetti, David Joseph Tan, Alessio Tonioni, Keisuke Tateno, Federico Tombari

Abstract

Estimating the 3D shape of an object from a single or multiple images has gained popularity thanks to the recent breakthroughs powered by deep learning. Most approaches regress the full object shape in a canonical pose, possibly extrapolating the occluded parts based on the learned priors. However, their viewpoint-invariant technique often discards the unique structures visible from the input images. In contrast, this paper proposes to rely on viewpoint-variant reconstructions by merging the visible information from the given views. Our approach is divided into three steps. Starting from the sparse views of the object, we first align them into a common coordinate system by estimating the relative pose between all the pairs. Then, inspired by traditional voxel carving, we generate an occupancy grid of the object from the silhouettes in the images and their relative poses. Finally, we refine the initial reconstruction to build a clean 3D model that preserves the details from each viewpoint. To validate the proposed method, we perform a comprehensive evaluation on the ShapeNet reference benchmark in terms of relative pose estimation and 3D shape reconstruction.
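
The second step of the pipeline builds on classical silhouette-based voxel carving: once the views share a common coordinate system, a voxel is kept only if every view projects it inside the object silhouette. The minimal sketch below illustrates that idea only; the function name, array shapes, and pinhole projection model are assumptions for illustration, not the authors' implementation, which additionally refines this initial occupancy grid with a learned model.

```python
# Minimal sketch of silhouette-based voxel carving (classical space carving),
# assuming binary masks and 3x4 projection matrices already expressed in the
# common coordinate system recovered from the estimated relative poses.
import numpy as np

def carve_occupancy_grid(silhouettes, projections, grid_min, grid_max, resolution=32):
    """Carve a binary occupancy grid from object silhouettes.

    silhouettes: list of (H, W) boolean masks, True where the object is visible.
    projections: list of (3, 4) camera matrices mapping homogeneous world points to pixels.
    grid_min, grid_max: 3-vectors bounding the reconstruction volume.
    """
    # Voxel centers of a regular grid inside the bounding volume.
    axes = [np.linspace(grid_min[i], grid_max[i], resolution) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    centers = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)                 # (N, 3)
    points_h = np.concatenate([centers, np.ones((len(centers), 1))], axis=1)  # (N, 4)

    occupied = np.ones(len(centers), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        pix = points_h @ P.T                      # project all voxels, shape (N, 3)
        in_front = pix[:, 2] > 1e-6               # keep voxels in front of the camera
        u = np.zeros(len(centers), dtype=int)
        v = np.zeros(len(centers), dtype=int)
        u[in_front] = np.round(pix[in_front, 0] / pix[in_front, 2]).astype(int)
        v[in_front] = np.round(pix[in_front, 1] / pix[in_front, 2]).astype(int)
        h, w = mask.shape
        inside_image = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        # A voxel survives only if this view sees it inside the silhouette.
        hits = np.zeros(len(centers), dtype=bool)
        hits[inside_image] = mask[v[inside_image], u[inside_image]]
        occupied &= hits

    return occupied.reshape(resolution, resolution, resolution)
```

Because carving can only remove volume, the quality of this initial grid depends directly on the accuracy of the pairwise pose estimates, which is why the paper evaluates relative pose estimation and 3D reconstruction together.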
