Title

Generate and Edit Your Own Character in a Canonical View

Authors

Jeong-gi Kwak, Yuanming Li, Dongsik Yoon, David Han, Hanseok Ko

Abstract

Recently, synthesizing personalized characters from a single user-given portrait has received remarkable attention with the drastic popularization of social media and the metaverse. The input image is not always in frontal view, so acquiring or predicting the canonical view is important for 3D modeling and other applications. Although progress in generative models enables the stylization of a portrait, obtaining the stylized image in the canonical view remains a challenging task. There have been several studies on face frontalization, but their performance degrades significantly when the input is not in the real-image domain, e.g., a cartoon or painting. Stylizing after frontalization also yields degraded output. In this paper, we propose a novel, unified framework that generates stylized portraits in the canonical view. With the proposed latent mapper, we analyze and discover a frontalization mapping in the latent space of StyleGAN, stylizing and frontalizing at once. In addition, our model can be trained on unlabelled 2D image sets, without any 3D supervision. Experimental results demonstrate the effectiveness of our method.
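The abstract's key idea, editing a portrait's latent code so that stylization and frontalization happen in one mapping step, can be sketched as a residual mapper over StyleGAN's per-layer style codes (W+ space). The sketch below is a minimal illustration only: the 3-layer MLP, its dimensions, and the random stand-in weights are assumptions, not the paper's actual architecture or trained parameters.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    # Elementwise LeakyReLU, a common activation in StyleGAN-style MLPs.
    return np.where(x > 0, x, slope * x)

class LatentMapper:
    """Hypothetical residual mapper over StyleGAN W+ codes.

    A small MLP predicts an offset that is added to the input code, so
    the edited latent stays close to the original identity while moving
    toward the stylized, frontalized region of the latent space.
    """
    def __init__(self, dim=512, num_hidden=3, seed=0):
        rng = np.random.default_rng(seed)
        # Small random matrices stand in for trained parameters.
        self.weights = [rng.normal(0.0, 0.02, (dim, dim))
                        for _ in range(num_hidden)]

    def __call__(self, w_plus):
        # w_plus: (num_layers, dim) per-layer style codes for one image.
        h = w_plus
        for i, W in enumerate(self.weights):
            h = h @ W
            if i < len(self.weights) - 1:  # no activation on the last layer
                h = leaky_relu(h)
        return w_plus + h                  # residual edit in W+ space

# Toy usage: a code that would normally come from a pretrained encoder.
w = np.random.default_rng(1).normal(size=(18, 512))
mapper = LatentMapper()
w_canonical = mapper(w)   # stylized + frontalized code, same shape as input
print(w_canonical.shape)  # (18, 512)
```

The residual form is one natural design choice for such an editor: because the mapper only adds an offset, an untrained (near-zero) mapper leaves the input identity almost unchanged.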
