Paper Title

NeuWigs: A Neural Dynamic Model for Volumetric Hair Capture and Animation

Paper Authors

Ziyan Wang, Giljoo Nam, Tuur Stuyck, Stephen Lombardi, Chen Cao, Jason Saragih, Michael Zollhoefer, Jessica Hodgins, Christoph Lassner

Paper Abstract

The capture and animation of human hair are two of the major challenges in the creation of realistic avatars for virtual reality. Both problems are highly challenging because hair has complex geometry and appearance and exhibits challenging motion. In this paper, we present a two-stage approach that models hair independently from the head to address these challenges in a data-driven manner. The first stage, state compression, learns a low-dimensional latent space of 3D hair states, containing both motion and appearance, via a novel autoencoder-as-a-tracker strategy. To better disentangle hair and head in appearance learning, we employ multi-view hair segmentation masks in combination with a differentiable volumetric renderer. The second stage learns a novel hair dynamics model that performs temporal hair transfer based on the discovered latent codes. To enforce higher stability while driving our dynamics model, we employ the 3D point-cloud autoencoder from the compression stage for denoising of the hair state. Our model outperforms the state of the art in novel view synthesis and is capable of creating novel hair animations without having to rely on hair observations as a driving signal. The project page is available at https://ziyanw1.github.io/neuwigs/.
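To make the two-stage structure concrete, below is a minimal, hypothetical PyTorch sketch of the pipeline the abstract describes: an autoencoder that compresses a 3D hair state into a latent code (stage 1), a dynamics model that steps that code forward in time (stage 2), and denoising by re-encoding decoded states during rollout. All class names, dimensions, and MLP architectures here are illustrative assumptions; the paper's actual implementation additionally uses multi-view hair segmentation masks, a differentiable volumetric renderer, and the autoencoder-as-a-tracker training strategy, none of which are reproduced here.

```python
import torch
import torch.nn as nn

class HairStateAutoencoder(nn.Module):
    """Stage 1 (state compression), sketched: encodes a 3D hair state
    (a point cloud with per-point features) into a low-dimensional latent
    code and decodes it back. Hypothetical architecture; the paper trains
    its autoencoder with segmentation masks and volumetric rendering."""
    def __init__(self, n_points=4096, feat_dim=6, latent_dim=256):
        super().__init__()
        in_dim = n_points * feat_dim
        self.n_points, self.feat_dim = n_points, feat_dim
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, in_dim),
        )

    def forward(self, points):                # points: (B, N, F)
        z = self.encoder(points.flatten(1))   # latent hair state
        recon = self.decoder(z).view(-1, self.n_points, self.feat_dim)
        return z, recon

class LatentDynamics(nn.Module):
    """Stage 2, sketched: predicts the next latent hair state from the
    current one (conditioning on head motion is omitted for brevity)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, z_t):
        return self.net(z_t)

# Rollout with re-encoding as a denoising step: each predicted latent is
# decoded to a 3D hair state and passed back through the encoder, which
# projects the prediction onto the learned state manifold. This is the
# stabilization idea the abstract refers to.
ae, dyn = HairStateAutoencoder(), LatentDynamics()
with torch.no_grad():
    z, _ = ae(torch.randn(1, 4096, 6))        # encode an initial state
    for _ in range(10):
        z_next = dyn(z)                       # predict next latent state
        decoded = ae.decoder(z_next).view(1, 4096, 6)
        z, _ = ae(decoded)                    # denoise by re-encoding
```

The key design choice this sketch tries to surface is that animation happens entirely in the latent space, so no hair observations are needed as a driving signal at playback time; the re-encoding loop keeps long rollouts from drifting off the space of plausible hair states.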
