Title
Autoencoding Variational Autoencoder
Authors
Abstract
Does a Variational AutoEncoder (VAE) consistently encode typical samples generated from its decoder? This paper shows that the perhaps surprising answer to this question is "No"; a (nominally trained) VAE does not necessarily amortize inference for typical samples that it is capable of generating. We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self-consistency. Our approach hinges on an alternative construction of the variational approximation distribution to the true posterior of an extended VAE model with a Markov chain alternating between the encoder and the decoder. The method can be used to train a VAE model from scratch or, given an already trained VAE, it can be run as a post-processing step in an entirely self-supervised way, without access to the original training data. Our experimental analysis reveals that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks. We provide experimental results on the ColorMnist and CelebA benchmark datasets that quantify the properties of the learned representations and compare the approach with a baseline that is specifically trained for the desired property.
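The self-consistency question in the abstract can be illustrated with a minimal sketch: sample a latent from the prior, decode it into a "typical" sample, re-encode that sample, and measure how far the recovered latent drifts from the original. Everything below is an illustrative assumption, not the paper's method or architecture: the encoder and decoder are stand-in linear maps, and the squared-norm penalty is just one simple way to quantify the encode/decode disagreement. Note that the loop needs no training data, mirroring the self-supervised post-processing setting described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in linear "networks" for a trained VAE (illustrative shapes:
# 2-dim latent space, 4-dim data space).
W_enc = rng.normal(size=(2, 4))  # encoder: data x -> latent mean
W_dec = rng.normal(size=(4, 2))  # decoder: latent z -> data mean

def encode(x):
    """Mean of the approximate posterior q(z|x) for a linear encoder."""
    return W_enc @ x

def decode(z):
    """Mean of the likelihood p(x|z) for a linear decoder."""
    return W_dec @ z

def self_consistency_penalty():
    """One decoder->encoder step of the alternating Markov chain:
    z ~ prior, x = decode(z), z_hat = encode(x).
    Returns a squared-norm measure of how far re-encoding drifts from z.
    A perfectly self-consistent VAE would drive this toward zero."""
    z = rng.normal(size=2)   # sample from the standard normal prior
    x = decode(z)            # a "typical" sample the model can generate
    z_hat = encode(x)        # amortized inference on that sample
    return float(np.sum((z - z_hat) ** 2))

print(self_consistency_penalty())
```

In a training loop, this penalty (averaged over many prior samples) would be added to the usual objective and minimized with respect to the encoder parameters; since only the model's own samples are used, it can run as a post-processing step on a pretrained VAE.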