Paper Title
Fast-SNARF: A Fast Deformer for Articulated Neural Fields
Paper Authors
Abstract
Neural fields have revolutionized the area of 3D reconstruction and novel view synthesis of rigid scenes. A key challenge in making such methods applicable to articulated objects, such as the human body, is to model the deformation of 3D locations between the rest pose (a canonical space) and the deformed space. We propose a new articulation module for neural fields, Fast-SNARF, which finds accurate correspondences between canonical space and posed space via iterative root finding. Fast-SNARF is a functional drop-in replacement for our previous work, SNARF, while significantly improving its computational efficiency. We contribute several algorithmic and implementation improvements over SNARF, yielding a speed-up of $150\times$. These improvements include voxel-based correspondence search, pre-computing the linear blend skinning function, and an efficient software implementation with CUDA kernels. Fast-SNARF enables efficient, simultaneous optimization of shape and skinning weights given deformed observations without correspondences (e.g. 3D meshes). Because learning deformation maps is a crucial component of many 3D human avatar methods and Fast-SNARF provides a computationally efficient solution, we believe that this work represents a significant step towards the practical creation of 3D virtual humans.
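The core operation the abstract describes — finding the canonical point that a linear-blend-skinning (LBS) deformation maps to a given posed point via iterative root finding — can be sketched minimally as follows. This is an illustrative NumPy sketch, not the paper's implementation: the function names are ours, and we use a damped Newton iteration with a finite-difference Jacobian, whereas SNARF/Fast-SNARF use Broyden's method, multiple bone-wise initializations, and CUDA kernels with precomputed voxelized skinning weights.

```python
import numpy as np

def lbs_forward(x_c, weights_fn, bone_transforms):
    """Deform a canonical point x_c (3,) by linear blend skinning:
    x_d = (sum_i w_i(x_c) * B_i) @ [x_c; 1], with B_i the 4x4 bone transforms."""
    w = weights_fn(x_c)                      # (n_bones,) skinning weights at x_c
    T = np.tensordot(w, bone_transforms, 1)  # (4, 4) blended transform
    return (T @ np.append(x_c, 1.0))[:3]

def find_canonical(x_d, weights_fn, bone_transforms, x_init,
                   iters=20, damping=1.0, eps=1e-4):
    """Solve lbs_forward(x_c) = x_d for x_c by damped Newton steps with a
    finite-difference Jacobian (the papers use Broyden's quasi-Newton method)."""
    x = x_init.astype(float).copy()
    for _ in range(iters):
        r = lbs_forward(x, weights_fn, bone_transforms) - x_d
        if np.linalg.norm(r) < 1e-6:
            break
        # Numerically estimate the 3x3 Jacobian of the residual w.r.t. x.
        J = np.stack([
            (lbs_forward(x + eps * np.eye(3)[k], weights_fn, bone_transforms)
             - lbs_forward(x, weights_fn, bone_transforms)) / eps
            for k in range(3)
        ], axis=1)
        x = x - damping * np.linalg.solve(J, r)
    return x
```

For a single bone with a pure translation and constant weight 1, the iteration recovers the canonical point as the posed point minus the translation in one Newton step; the interesting (and expensive) cases arise when the learned weight field makes the map nonlinear, which is where the voxelized weights and fused CUDA kernels mentioned above pay off.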