Paper Title
The Neural Coding Framework for Learning Generative Models
Paper Authors
Paper Abstract
Neural generative models can be used to learn complex probability distributions from data, to sample from them, and to produce probability density estimates. We propose a computational framework for developing neural generative models inspired by the theory of predictive processing in the brain. According to predictive processing theory, the neurons in the brain form a hierarchy in which neurons in one level form expectations about sensory inputs from another level. These neurons update their local models based on differences between their expectations and the observed signals. In a similar way, artificial neurons in our generative models predict what neighboring neurons will do, and adjust their parameters based on how well the predictions match reality. In this work, we show that the neural generative models learned within our framework perform well in practice across several benchmark datasets and metrics, and either remain competitive with or significantly outperform other generative models with similar functionality (such as the variational auto-encoder).
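The core idea in the abstract — neurons that issue top-down predictions and adjust themselves from local prediction errors — can be illustrated with a minimal two-layer predictive-coding sketch. This is only an illustrative toy, not the paper's actual model or algorithm: the variable names (`z`, `W`), the layer sizes, and the learning rates are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer sketch (illustrative only, not the paper's model):
# a latent state z issues a top-down prediction W @ z of the observation x,
# and both z and W are adjusted using only the local prediction error.
dim_x, dim_z = 8, 4
W = rng.normal(scale=0.1, size=(dim_x, dim_z))  # generative (prediction) weights
lr_z, lr_W = 0.1, 0.01                          # assumed step sizes

x = rng.normal(size=dim_x)   # a "sensory" observation
z = np.zeros(dim_z)          # latent neurons' state

for _ in range(50):
    pred = W @ z                   # top-down expectation of x
    err = x - pred                 # prediction error (expectation vs. signal)
    z += lr_z * (W.T @ err)        # neurons settle to reduce the mismatch
    W += lr_W * np.outer(err, z)   # local, Hebbian-like weight update

# The squared prediction error shrinks relative to predicting nothing at all.
print(float(np.mean(err ** 2)), float(np.mean(x ** 2)))
```

Both updates use only quantities available at the connection itself (the local error and the local activity), which is what makes this family of rules "local" in contrast to end-to-end backpropagation.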