Paper Title
MaterialGAN: Reflectance Capture using a Generative SVBRDF Model
Paper Authors
Paper Abstract
We address the problem of reconstructing spatially-varying BRDFs from a small set of image measurements. This is a fundamentally under-constrained problem, and previous work has relied on using various regularization priors or on capturing many images to produce plausible results. In this work, we present MaterialGAN, a deep generative convolutional network based on StyleGAN2, trained to synthesize realistic SVBRDF parameter maps. We show that MaterialGAN can be used as a powerful material prior in an inverse rendering framework: we optimize in its latent representation to generate material maps that match the appearance of the captured images when rendered. We demonstrate this framework on the task of reconstructing SVBRDFs from images captured under flash illumination using a hand-held mobile phone. Our method succeeds in producing plausible material maps that accurately reproduce the target images, and outperforms previous state-of-the-art material capture methods in evaluations on both synthetic and real data. Furthermore, our GAN-based latent space allows for high-level semantic material editing operations such as generating material variations and material morphing.
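As a rough illustration of the latent-space inversion described in the abstract, the sketch below shows how one might optimize a GAN latent code against a differentiable rendering loss. This is a minimal sketch under stated assumptions: the names `material_gan`, `render`, `mean_latent`, and the plain L1 image loss are illustrative stand-ins, not the authors' actual API or loss formulation.

```python
import torch

# Hypothetical stand-ins for the paper's components: `material_gan` maps a
# latent code to SVBRDF maps (e.g. albedo, normals, roughness, specular),
# and `render` is a differentiable renderer for the flash-lit capture setup.
def invert_material(material_gan, render, captured_images, cameras, lights,
                    num_steps=500, lr=0.02):
    # Start from an initial latent code (here, the generator's mean latent).
    w = material_gan.mean_latent().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)

    for _ in range(num_steps):
        maps = material_gan(w)                   # synthesize SVBRDF maps
        loss = 0.0
        for img, cam, light in zip(captured_images, cameras, lights):
            rendered = render(maps, cam, light)  # re-render each captured view
            loss = loss + torch.nn.functional.l1_loss(rendered, img)
        optimizer.zero_grad()
        loss.backward()                          # gradients flow through renderer and GAN
        optimizer.step()

    return material_gan(w).detach()              # recovered material maps
```

Because the optimization variable is the GAN's latent code rather than the per-pixel maps themselves, the generator acts as the material prior the abstract refers to: any solution the loop reaches stays on the manifold of realistic SVBRDFs the network was trained to synthesize.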