Paper Title

Optimizing Hierarchical Image VAEs for Sample Quality

Paper Authors

Eric Luhman, Troy Luhman

Paper Abstract

While hierarchical variational autoencoders (VAEs) have achieved great density estimation on image modeling tasks, samples from their prior tend to look less convincing than models with similar log-likelihood. We attribute this to learned representations that over-emphasize compressing imperceptible details of the image. To address this, we introduce a KL-reweighting strategy to control the amount of information in each latent group, and employ a Gaussian output layer to reduce sharpness in the learning objective. To trade off image diversity for fidelity, we additionally introduce a classifier-free guidance strategy for hierarchical VAEs. We demonstrate the effectiveness of these techniques in our experiments. Code is available at https://github.com/tcl9876/visual-vae.
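The abstract names two training-time ideas (a per-group KL reweighting and a Gaussian output layer) and one sampling-time idea (classifier-free guidance for hierarchical VAEs). As a rough illustration of the first and third, the sketch below computes a reweighted sum of per-group KL terms for diagonal-Gaussian latents and a guided prior mean. The function names, the weighting scheme, and the guidance formula are illustrative assumptions made here; they are not taken from the paper or the visual-vae repository.

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over dims."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def reweighted_kl_objective(groups, weights):
    """Sum of per-group KL terms, each scaled by its own weight.

    `groups` is a list of (mu_q, logvar_q, mu_p, logvar_p) tuples, one per latent
    group in the hierarchy; `weights` controls how strongly each group's KL is
    penalized, i.e. how much information that group is encouraged to carry.
    """
    return sum(w * gaussian_kl(*g) for g, w in zip(groups, weights))

def guided_mean(mu_uncond, mu_cond, guidance_weight):
    """Classifier-free-guidance-style shift of a latent group's prior mean:
    move from the unconditional prediction toward (and past) the conditional one,
    trading sample diversity for fidelity."""
    return mu_uncond + guidance_weight * (mu_cond - mu_uncond)

# Toy usage: two latent groups, with the coarser group penalized less.
rng = np.random.default_rng(0)
groups = [(rng.normal(size=4), np.zeros(4), np.zeros(4), np.zeros(4)) for _ in range(2)]
print(reweighted_kl_objective(groups, weights=[0.5, 1.0]))
```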
