Paper Title

LISA: Learning Implicit Shape and Appearance of Hands

Authors

Enric Corona, Tomas Hodan, Minh Vo, Francesc Moreno-Noguer, Chris Sweeney, Richard Newcombe, Lingni Ma

Abstract

This paper proposes a do-it-all neural model of human hands, named LISA. The model can capture accurate hand shape and appearance, generalize to arbitrary hand subjects, provide dense surface correspondences, be reconstructed from images in the wild, and be easily animated. We train LISA by minimizing the shape and appearance losses on a large set of multi-view RGB image sequences annotated with coarse 3D poses of the hand skeleton. For a 3D point in the hand's local coordinate frame, our model predicts the color and the signed distance with respect to each hand bone independently, then combines the per-bone predictions using predicted skinning weights. The shape, color, and pose representations are disentangled by design, allowing estimation or animation of only selected parameters. We experimentally demonstrate that LISA can accurately reconstruct a dynamic hand from monocular or multi-view sequences, achieving noticeably higher quality of reconstructed hand shapes compared to baseline approaches. Project page: https://www.iri.upc.edu/people/ecorona/lisa/.
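The per-bone combination described in the abstract can be illustrated with a minimal sketch: each bone contributes a signed distance and a color for a query point, and the predicted skinning weights blend them into a single prediction. All names and shapes below are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of LISA-style per-bone blending (illustrative only).
import numpy as np

def blend_per_bone(sdfs, colors, skin_weights):
    """Blend per-bone SDF and color predictions with skinning weights.

    sdfs:         (B,)   signed distance to the surface, per bone
    colors:       (B, 3) RGB color predicted per bone
    skin_weights: (B,)   predicted skinning weights (normalized here)
    """
    w = skin_weights / skin_weights.sum()  # defensive normalization
    sdf = float(np.dot(w, sdfs))           # weighted sum of distances
    color = w @ colors                     # weighted sum of colors
    return sdf, color

# Toy example with two bones.
sdfs = np.array([0.02, -0.01])
colors = np.array([[0.8, 0.6, 0.5],
                   [0.7, 0.5, 0.4]])
weights = np.array([0.75, 0.25])
sdf, color = blend_per_bone(sdfs, colors, weights)
```

Because the blend is linear in the skinning weights, the combined surface deforms smoothly as the pose (and hence the weights) changes, which is what makes the representation easy to animate.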
