Paper Title

Self-supervised Neural Articulated Shape and Appearance Models

Authors

Fangyin Wei, Rohan Chabra, Lingni Ma, Christoph Lassner, Michael Zollhöfer, Szymon Rusinkiewicz, Chris Sweeney, Richard Newcombe, Mira Slavcheva

Abstract

Learning geometry, motion, and appearance priors of object classes is important for the solution of a large variety of computer vision problems. While the majority of approaches have focused on static objects, dynamic objects, especially with controllable articulation, are less explored. We propose a novel approach for learning a representation of the geometry, appearance, and motion of a class of articulated objects given only a set of color images as input. In a self-supervised manner, our novel representation learns shape, appearance, and articulation codes that enable independent control of these semantic dimensions. Our model is trained end-to-end without requiring any articulation annotations. Experiments show that our approach performs well for different joint types, such as revolute and prismatic joints, as well as different combinations of these joints. Compared to the state of the art, which uses direct 3D supervision and does not output appearance, we recover more faithful geometry and appearance from 2D observations only. In addition, our representation enables a large variety of applications, such as few-shot reconstruction, the generation of novel articulations, and novel view synthesis.
