Paper Title
Learning Formation of Physically-Based Face Attributes
Paper Authors
Paper Abstract
Based on a combined dataset of 4,000 high-resolution facial scans, we introduce a non-linear morphable face model capable of producing multifarious face geometry at pore-level resolution, coupled with material attributes for use in physically-based rendering. We aim to maximize the variety of face identities while increasing the robustness of correspondence between unique components, including mid-frequency geometry, albedo maps, specular intensity maps, and high-frequency displacement details. Our deep-learning-based generative model learns to correlate albedo and geometry, which ensures the anatomical correctness of the generated assets. We demonstrate potential uses of our generative model for novel identity generation, model fitting, interpolation, animation, high-fidelity data visualization, and low-to-high-resolution data-domain transfer. We hope the release of this generative model will encourage further cooperation among graphics, vision, and data-focused professionals, while demonstrating the cumulative value of each individual's complete biometric profile.
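To make the "morphable model" idea concrete, here is a minimal toy sketch of the classical linear formulation (mean shape plus identity basis) with latent-space interpolation between two identities. This is NOT the paper's non-linear deep model; all dimensions, names, and the random "basis" are illustrative assumptions, shown only to clarify how identity codes decode to geometry and how interpolation produces new identities.

```python
# Toy linear morphable face model sketch (illustrative, not the paper's model).
# An identity is a low-dimensional code; decoding maps it to vertex positions.
import numpy as np

rng = np.random.default_rng(0)

N_VERTS = 100      # toy vertex count (real scans are pore-level resolution)
LATENT_DIM = 8     # assumed identity-code dimensionality

mean_shape = rng.normal(size=(N_VERTS * 3,))        # mean face geometry (xyz flattened)
basis = rng.normal(size=(N_VERTS * 3, LATENT_DIM))  # identity deformation basis

def decode(code: np.ndarray) -> np.ndarray:
    """Map an identity code to flattened vertex positions: mean + basis @ code."""
    return mean_shape + basis @ code

def interpolate(code_a: np.ndarray, code_b: np.ndarray, t: float) -> np.ndarray:
    """Blend two identities in latent space, then decode (identity interpolation)."""
    return decode((1.0 - t) * code_a + t * code_b)

# Two random identity codes and their latent midpoint.
id_a = rng.normal(size=LATENT_DIM)
id_b = rng.normal(size=LATENT_DIM)
mid = interpolate(id_a, id_b, 0.5)
```

In the paper's setting the linear decoder is replaced by a learned non-linear network that jointly decodes geometry and albedo, which is what enforces the anatomical consistency between the two; the latent-interpolation mechanism, however, is the same in spirit.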