Paper Title
StyleDomain: Efficient and Lightweight Parameterizations of StyleGAN for One-shot and Few-shot Domain Adaptation
Paper Authors
Paper Abstract
Domain adaptation of GANs is the problem of fine-tuning a GAN model pretrained on a large dataset (e.g., StyleGAN) to a specific domain with few samples (e.g., painted faces, sketches, etc.). While many methods tackle this problem in different ways, several important questions remain unanswered. In this paper, we provide a systematic and in-depth analysis of the domain adaptation problem of GANs, focusing on the StyleGAN model. We perform a detailed exploration of which parts of StyleGAN are most important for adapting the generator to a new domain, depending on the similarity between the source and target domains. As a result of this study, we propose new efficient and lightweight parameterizations of StyleGAN for domain adaptation. In particular, we show that there exist directions in StyleSpace (StyleDomain directions) that are sufficient for adapting to similar domains. For dissimilar domains, we propose the Affine+ and AffineLight+ parameterizations, which allow us to outperform existing baselines in few-shot adaptation while having significantly fewer trainable parameters. Finally, we examine StyleDomain directions and discover many of their surprising properties, which we apply to domain mixing and cross-domain image morphing. Source code can be found at https://github.com/AIRI-Institute/StyleDomain.
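To make the first parameterization concrete, below is a minimal sketch (not the authors' code) of the StyleDomain idea: adapt a frozen, pretrained StyleGAN to a similar target domain by optimizing a single offset vector in StyleSpace. The generator interface (get_style_codes, synthesize_from_styles) and the domain loss are hypothetical placeholders standing in for whatever the actual codebase exposes.

```python
import torch

# Hedged sketch: learn one "StyleDomain direction", a single offset in
# StyleSpace shared across all latents, while every generator weight
# stays frozen. The generator methods and loss below are assumptions,
# not the repository's real API.
def train_styledomain_direction(generator, loss_fn, num_steps=600,
                                lr=0.01, batch_size=4, device="cuda"):
    generator.eval().requires_grad_(False)       # freeze all generator weights
    style_dim = generator.style_dim              # total StyleSpace dimensionality (assumed attribute)
    direction = torch.zeros(style_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([direction], lr=lr)

    for _ in range(num_steps):
        z = torch.randn(batch_size, generator.z_dim, device=device)
        s = generator.get_style_codes(z)         # StyleSpace codes, shape (B, style_dim)
        img = generator.synthesize_from_styles(s + direction)
        loss = loss_fn(img)                      # e.g., a CLIP-based target-domain loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    return direction.detach()
```

For dissimilar domains, the abstract proposes Affine+ (training the affine layers that map W to StyleSpace) and the lighter AffineLight+. One plausible reading of AffineLight+ is a low-rank reparameterization of the affine weight updates; the sketch below illustrates that pattern under this assumption, with the wrapper name and rank chosen purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch: wrap a frozen pretrained affine layer and learn only a
# low-rank update B @ A, drastically reducing trainable parameters
# compared to fine-tuning the full affine weights.
class LowRankAffine(nn.Module):
    def __init__(self, affine: nn.Linear, rank: int = 5):
        super().__init__()
        self.affine = affine.requires_grad_(False)        # frozen pretrained weights
        out_f, in_f = affine.weight.shape
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))   # zero-init: start from the pretrained mapping

    def forward(self, w):
        # low-rank delta (out_f, in_f) applied on top of the frozen affine map
        return self.affine(w) + F.linear(w, self.B @ self.A)
```

In this sketch only A and B are optimized, so the parameter count per affine layer drops from out_f * in_f to rank * (out_f + in_f), which matches the abstract's claim of significantly fewer trainable parameters.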