Paper Title

Real-time Virtual-Try-On from a Single Example Image through Deep Inverse Graphics and Learned Differentiable Renderers

Paper Authors

Robin Kips, Ruowei Jiang, Sileye Ba, Brendan Duke, Matthieu Perrot, Pietro Gori, Isabelle Bloch

Paper Abstract

Augmented reality applications have rapidly spread across online platforms, allowing consumers to virtually try on a variety of products, such as makeup, hair dyeing, or shoes. However, parametrizing a renderer to synthesize realistic images of a given product remains a challenging task that requires expert knowledge. While recent work has introduced neural rendering methods for virtual try-on from example images, current approaches are based on large generative models that cannot be used in real-time on mobile devices. This calls for a hybrid method that combines the advantages of computer graphics and neural rendering approaches. In this paper, we propose a novel framework based on deep learning to build a real-time inverse graphics encoder that learns to map a single example image into the parameter space of a given augmented reality rendering engine. Our method leverages self-supervised learning and does not require labeled training data, which makes it extendable to many virtual try-on applications. Furthermore, most augmented reality renderers are not differentiable in practice, due to algorithmic choices or implementation constraints needed to reach real-time performance on portable devices. To relax the need for a graphics-based differentiable renderer in inverse graphics problems, we introduce a trainable imitator module. Our imitator is a generative network that learns to accurately reproduce the behavior of a given non-differentiable renderer. We propose a novel rendering sensitivity loss to train the imitator, which ensures that the network learns an accurate and continuous representation for each rendering parameter. Our framework enables novel applications where consumers can virtually try on a novel, unknown product from an inspirational reference image on social media. It can also be used by graphics artists to automatically create realistic renderings from a reference product image.
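The abstract's self-supervised setup can be illustrated with a toy sketch: sample random renderer parameters, render them, and train an encoder to recover the parameters from the image, so the sampled parameters themselves serve as labels and no annotated data is needed. The linear "renderer" and least-squares "encoder" below are hypothetical stand-ins chosen only to keep the sketch runnable; the paper's actual renderer is a black-box AR engine and its encoder is a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in renderer: maps a makeup parameter vector g
# (e.g. lipstick R, G, B, opacity) to a flat "rendered image" vector.
# The real AR rendering engine is a black box; this toy keeps the sketch runnable.
W_render = rng.normal(size=(16, 4))

def renderer(g):
    return W_render @ g  # stand-in for R(face, g)

# Self-supervised data: sample random graphics parameters and render them.
# No labeled (image, parameter) pairs are required -- each sampled g is its own label.
G = rng.uniform(0.0, 1.0, size=(1000, 4))   # sampled rendering parameters
X = np.stack([renderer(g) for g in G])      # corresponding rendered example images

# "Encoder" fit: a least-squares linear map from image to parameters,
# standing in for the deep inverse graphics encoder.
E, *_ = np.linalg.lstsq(X, G, rcond=None)

# At inference time, a single example image is mapped to renderer parameters,
# which the real-time engine can then apply to the user's face.
g_true = np.array([0.8, 0.1, 0.2, 0.9])
g_pred = renderer(g_true) @ E
print(np.allclose(g_pred, g_true, atol=1e-6))  # the toy renderer is exactly invertible
```

Because the training images are generated by the renderer itself, this recipe extends to any product category the engine supports, which is what makes the approach scalable across virtual try-on applications.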
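The rendering sensitivity loss is only named in the abstract, not specified, so the following is a plausible hypothetical formulation rather than the paper's actual loss: perturb one rendering parameter at a time by a small step and penalize any mismatch between the image change produced by the real renderer and the change produced by the imitator, which encourages an accurate and continuous response to each parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 4))

def renderer(g):
    # Non-differentiable black-box renderer (toy linear stand-in).
    return W @ g

def imitator(g, W_hat):
    # Generative imitator network (toy linear stand-in parametrized by W_hat).
    return W_hat @ g

def sensitivity_loss(W_hat, n_samples=64, delta=0.05, n_params=4):
    """Hypothetical rendering sensitivity loss (finite-difference version):
    for each parameter k, the imitator's response to a small perturbation
    delta * e_k should match the renderer's response."""
    total = 0.0
    for _ in range(n_samples):
        g = rng.uniform(0.0, 1.0, size=n_params)
        for k in range(n_params):
            e_k = np.eye(n_params)[k]
            d_real = renderer(g + delta * e_k) - renderer(g)
            d_imit = imitator(g + delta * e_k, W_hat) - imitator(g, W_hat)
            total += np.sum((d_imit - d_real) ** 2)
    return total / n_samples

# A perfect imitator drives the sensitivity loss to zero; a biased one does not.
loss_perfect = sensitivity_loss(W)
loss_biased = sensitivity_loss(W + 0.1)
print(loss_perfect < 1e-12, loss_biased > loss_perfect)
```

Once such an imitator is trained, gradients can flow through it in place of the non-differentiable engine, which is the role the abstract assigns to the imitator module in the inverse graphics pipeline.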
