Paper Title
RigNeRF: Fully Controllable Neural 3D Portraits
Paper Authors
Paper Abstract
Volumetric neural rendering methods, such as neural radiance fields (NeRFs), have enabled photo-realistic novel view synthesis. However, in their standard form, NeRFs do not support the editing of objects, such as a human head, within a scene. In this work, we propose RigNeRF, a system that goes beyond just novel view synthesis and enables full control of head pose and facial expressions learned from a single portrait video. We model changes in head pose and facial expressions using a deformation field that is guided by a 3D morphable face model (3DMM). The 3DMM effectively acts as a prior for RigNeRF that learns to predict only residuals to the 3DMM deformations and allows us to render novel (rigid) poses and (non-rigid) expressions that were not present in the input sequence. Using only a smartphone-captured short video of a subject for training, we demonstrate the effectiveness of our method on free view synthesis of a portrait scene with explicit head pose and expression controls. The project page can be found here: http://shahrukhathar.github.io/2022/06/06/RigNeRF.html
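The central idea in the abstract — a deformation field in which a 3DMM prior supplies the bulk of the deformation and a network learns only a residual correction — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the MLP weights, the input dimensions for pose and expression, and the function names are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residual-MLP weights (illustrative only, not the paper's architecture).
# Input: 3D point (3) + head pose (6, rotation + translation) + expression coeffs (10).
W1 = rng.standard_normal((3 + 6 + 10, 64)) * 0.01
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 3)) * 0.01
b2 = np.zeros(3)

def residual_mlp(x, pose, expr):
    """Predict a small residual offset on top of the 3DMM-guided deformation."""
    h = np.tanh(np.concatenate([x, pose, expr]) @ W1 + b1)
    return h @ W2 + b2

def deform_to_canonical(x, dmm_offset, pose, expr):
    """Map a point x in the observed (deformed) frame to the canonical frame.

    dmm_offset is the deformation prescribed by the 3DMM prior at x; because
    the MLP learns only the residual, the prior does most of the work and the
    model can generalize to poses/expressions absent from the training video.
    """
    return x + dmm_offset + residual_mlp(x, pose, expr)

# Example: deform one sample point along a camera ray.
x = np.array([0.1, -0.2, 0.5])
dmm_offset = np.array([0.01, 0.0, -0.02])  # assumed 3DMM deformation at x
pose = np.zeros(6)
expr = np.zeros(10)
x_canonical = deform_to_canonical(x, dmm_offset, pose, expr)
```

In the full system, `x_canonical` would then be passed to a canonical NeRF that predicts density and color; the sketch only covers the deformation step that the abstract describes.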