Paper Title

TinyGAN: Distilling BigGAN for Conditional Image Generation

Paper Authors

Ting-Yun Chang, Chi-Jen Lu

Paper Abstract

Generative Adversarial Networks (GANs) have become a powerful approach for generative image modeling. However, GANs are notorious for their training instability, especially on large-scale, complex datasets. While the recent work of BigGAN has significantly improved the quality of image generation on ImageNet, it requires a huge model, making it hard to deploy on resource-constrained devices. To reduce the model size, we propose a black-box knowledge distillation framework for compressing GANs, which highlights a stable and efficient training process. Given BigGAN as the teacher network, we manage to train a much smaller student network to mimic its functionality, achieving competitive performance on Inception and FID scores with the generator having $16\times$ fewer parameters.
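The abstract describes a black-box distillation setup: the student never sees BigGAN's weights or gradients, only its input-output behavior on sampled (noise, class) pairs. Below is a minimal sketch of that idea, assuming a PyTorch setting. The small StudentGenerator architecture and the plain pixel-level L1 loss are illustrative assumptions for this sketch, not the paper's exact objective, which the abstract does not spell out.

```python
import torch
import torch.nn as nn

class StudentGenerator(nn.Module):
    """A deliberately small conditional generator (hypothetical architecture)."""
    def __init__(self, z_dim=128, n_classes=1000):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)  # class-conditioning vector
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 256 * 4 * 4),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),  # 4x4 -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),    # 32x32 -> 64x64
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Tanh(),     # 64x64 -> 128x128
        )

    def forward(self, z, y):
        # Concatenate noise with the class embedding, then upsample to an image.
        return self.net(torch.cat([z, self.embed(y)], dim=1))

def distill_step(student, optimizer, z, y, teacher_img):
    """One training step: match the black-box teacher's output pixel-wise."""
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(student(z, y), teacher_img)
    loss.backward()
    optimizer.step()
    return loss.item()

# Black-box data collection: only sampling from the teacher is required.
# `teacher` stands in for a pretrained BigGAN generator (assumed callable
# on (z, y) and returning images in [-1, 1]).
student = StudentGenerator()
opt = torch.optim.Adam(student.parameters(), lr=2e-4)
z = torch.randn(8, 128)
y = torch.randint(0, 1000, (8,))
with torch.no_grad():
    teacher_img = torch.tanh(torch.randn(8, 3, 128, 128))  # placeholder for teacher(z, y)
print(distill_step(student, opt, z, y, teacher_img))
```

A pure pixel loss like this tends to yield blurry samples, so a full framework of this kind would typically combine it with adversarial or feature-level terms; the sketch only shows the black-box mimicking step, which is what makes the training stable and independent of the teacher's internals.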
