Paper Title
Single-view 3D Body and Cloth Reconstruction under Complex Poses
Paper Authors
Paper Abstract
Recent advances in 3D human shape reconstruction from single images have shown impressive results, leveraging deep networks that model the so-called implicit function to learn the occupancy status of arbitrarily dense 3D points in space. However, while current algorithms based on this paradigm, like PIFuHD, are able to estimate accurate geometry of the human shape and clothes, they require high-resolution input images and are not able to capture complex body poses. Most training and evaluation are performed on 1k-resolution images of humans standing in front of the camera under neutral body poses. In this paper, we leverage publicly available data to extend existing implicit function-based models to deal with images of humans that can have arbitrary poses and self-occluded limbs. We argue that the representation power of the implicit function is not sufficient to simultaneously model details of the geometry and of the body pose. We therefore propose a coarse-to-fine approach in which we first learn an implicit function that maps the input image to a 3D body shape with a low level of detail, but which correctly fits the underlying human pose, despite its complexity. We then learn a displacement map, conditioned on the smoothed surface and on the input image, which encodes the high-frequency details of the clothes and body. In the experimental section, we show that this coarse-to-fine strategy represents a very good trade-off between shape detail and pose correctness, comparing favorably to the most recent state-of-the-art approaches. Our code will be made publicly available.
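Since the abstract is the only technical description available here, the following is a minimal PyTorch sketch of the two ingredients it mentions: a pixel-aligned implicit occupancy function that yields the coarse body shape, and a displacement head that offsets the coarse surface along its normals to recover high-frequency cloth and body detail. All module names, feature dimensions, and the bilinear feature-sampling step are illustrative assumptions made for this sketch, not the authors' actual architecture.

```python
# Minimal, hypothetical sketch of a coarse-to-fine pipeline in the spirit of
# the abstract: (1) pixel-aligned implicit occupancy for a coarse shape,
# (2) a displacement head refining the coarse surface. Not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def sample_pixel_features(feat_map, points_2d):
    """Bilinearly sample image features at projected 2D locations.

    feat_map:  (B, C, H, W) image feature map.
    points_2d: (B, N, 2) projections of 3D query points, normalized to [-1, 1].
    returns:   (B, N, C) pixel-aligned features.
    """
    grid = points_2d.unsqueeze(2)                       # (B, N, 1, 2)
    feats = F.grid_sample(feat_map, grid, align_corners=True)  # (B, C, N, 1)
    return feats.squeeze(-1).permute(0, 2, 1)           # (B, N, C)


class CoarseOccupancyMLP(nn.Module):
    """Maps a pixel-aligned feature and a 3D query point to an occupancy score."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, pixel_feat, points):
        # pixel_feat: (B, N, feat_dim); points: (B, N, 3) in camera space.
        return torch.sigmoid(self.mlp(torch.cat([pixel_feat, points], dim=-1)))


class DisplacementHead(nn.Module):
    """Predicts a signed per-vertex offset along the coarse surface normal."""

    def __init__(self, feat_dim=256, max_offset=0.02):
        super().__init__()
        self.max_offset = max_offset  # assumed bound on the offset, in meters
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Tanh(),
        )

    def forward(self, pixel_feat, vertices, normals):
        # pixel_feat: (B, V, feat_dim); vertices, normals: (B, V, 3).
        d = self.mlp(torch.cat([pixel_feat, vertices], dim=-1))  # (B, V, 1)
        return vertices + self.max_offset * d * normals          # refined vertices
```

In a full pipeline, the coarse mesh would typically be extracted from the occupancy field (e.g., with Marching Cubes) and smoothed before its vertices and normals are passed to the displacement head, which matches the abstract's description of conditioning the refinement on the smoothed surface and the input image.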