Paper Title

AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis

Authors

Zhiqin Chen, Kangxue Yin, Sanja Fidler

Abstract

In this paper, we address the problem of texture representation for 3D shapes for the challenging and underexplored tasks of texture transfer and synthesis. Previous works either apply spherical texture maps which may lead to large distortions, or use continuous texture fields that yield smooth outputs lacking details. We argue that the traditional way of representing textures with images and linking them to a 3D mesh via UV mapping is more desirable, since synthesizing 2D images is a well-studied problem. We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space, by mapping the corresponding semantic parts of different 3D shapes to the same location in the UV space. As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images. Texture alignment is learned in an unsupervised manner by a simple yet effective texture alignment module, taking inspiration from traditional works on linear subspace learning. The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction. We conduct experiments on multiple datasets to demonstrate the effectiveness of our method. Project page: https://nv-tlabs.github.io/AUV-NET.
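As a rough illustration of why alignment matters (this is a toy sketch, not AUV-Net's actual architecture), once textures from many shapes live in a shared, aligned UV space they become directly comparable images, and even the classical linear-subspace (PCA) techniques that the abstract cites as inspiration can model, reconstruct, and blend them. All shapes, sizes, and data below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend we have N aligned 8x8 RGB texture maps, flattened to vectors.
# "Aligned" means corresponding semantic parts occupy the same UV pixels.
N, H, W, C = 50, 8, 8, 3
textures = rng.random((N, H * W * C))

# Fit a linear subspace (PCA) to the aligned textures.
mean = textures.mean(axis=0)
centered = textures - mean
k = 10  # number of principal components to keep
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:k]                      # (k, H*W*C) principal directions

# Project each texture into the subspace and reconstruct it.
coeffs = centered @ basis.T         # (N, k) low-dimensional codes
recon = coeffs @ basis + mean       # back to image space

# Because the textures are aligned, interpolating codes in the
# subspace yields a plausible new texture (the basis of synthesis).
blend = 0.5 * (coeffs[0] + coeffs[1])
new_texture = (blend @ basis + mean).reshape(H, W, C)
print(new_texture.shape)
```

Without alignment, the same pixel would correspond to different semantic parts on different shapes, and such per-pixel statistics would average unrelated content into a blur; this is the failure mode the learned aligned UV space avoids.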
