Paper Title

Learning Sparse Latent Representations for Generator Model

Authors

Hanao Li, Tian Han

Abstract

Sparsity is a desirable attribute: it can lead to more efficient and more effective representations than dense models. At the same time, learning sparse latent representations remains a challenging problem in computer vision and machine learning due to its complexity. In this paper, we present a new unsupervised learning method that enforces sparsity on the latent space of the generator model, using a gradually sparsified spike-and-slab distribution as our prior. Our model consists of only one top-down generator network that maps the latent variables to the observed data. Latent variables can be inferred by following the generator's posterior direction using a non-persistent gradient-based method. Spike-and-slab regularization in the inference step pushes non-informative latent dimensions towards zero to induce sparsity. Extensive experiments show that the model can preserve the majority of the information from the original images with sparse representations, while demonstrating improved results compared to other existing methods. We observe that our model can learn disentangled semantics and increase the explainability of the latent codes, while boosting robustness in classification and denoising tasks.
