Paper Title

Probing TryOnGAN

Paper Authors

Saurabh Kumar, Nishant Sinha

Paper Abstract

TryOnGAN is a recent virtual try-on approach that generates highly realistic images and outperforms most previous approaches. In this article, we reproduce the TryOnGAN implementation and probe it along several angles: the impact of transfer learning, variants of conditioning image generation with poses, and properties of latent space interpolation. Some of these facets have never been explored in the literature before. We find that transfer learning helps training initially but its gains are lost as the models train longer, and that pose conditioning via concatenation performs better. The latent space self-disentangles the pose and style features and enables style transfer across poses. Our code and models are available as open source.
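The sketch below is a minimal illustration, not the authors' implementation, of the two ideas the abstract highlights: pose conditioning via concatenation (pose heatmaps stacked onto intermediate feature maps along the channel axis) and style transfer across poses by interpolating only the style latent while the pose input is held fixed. All names (`PoseConcatBlock`, `lerp_styles`, `POSE_CHANNELS`, `LATENT_DIM`) and tensor shapes are illustrative assumptions about a StyleGAN2-style generator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 512      # assumed style-latent dimensionality (StyleGAN2 default)
POSE_CHANNELS = 17    # assumed one heatmap channel per body keypoint


class PoseConcatBlock(nn.Module):
    """Pose conditioning via concatenation: pose heatmaps are resized to the
    feature resolution and stacked along the channel axis before the convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + POSE_CHANNELS, out_ch, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, pose_heatmaps: torch.Tensor) -> torch.Tensor:
        pose = F.interpolate(pose_heatmaps, size=feat.shape[-2:], mode="nearest")
        return torch.relu(self.conv(torch.cat([feat, pose], dim=1)))


def lerp_styles(w_a: torch.Tensor, w_b: torch.Tensor, alpha: float) -> torch.Tensor:
    """Interpolate only the style latent; with pose and style self-disentangled,
    feeding the same pose input with the blended latent transfers garment style
    across poses."""
    return (1.0 - alpha) * w_a + alpha * w_b


if __name__ == "__main__":
    # Toy tensors stand in for a trained generator's intermediate activations.
    feat = torch.randn(1, 64, 32, 32)                 # intermediate feature map
    pose = torch.randn(1, POSE_CHANNELS, 256, 256)    # heatmaps for the target pose
    out = PoseConcatBlock(64, 64)(feat, pose)         # pose-conditioned features

    w_a, w_b = torch.randn(1, LATENT_DIM), torch.randn(1, LATENT_DIM)
    w_mid = lerp_styles(w_a, w_b, 0.5)                # style halfway between two garments
    print(out.shape, w_mid.shape)
```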
