Paper Title

Flowification: Everything is a Normalizing Flow

Paper Authors

Bálint Máté, Samuel Klein, Tobias Golling, François Fleuret

Paper Abstract

The two key characteristics of a normalizing flow are that it is invertible (in particular, dimension preserving) and that it monitors the amount by which it changes the likelihood of data points as samples are propagated along the network. Recently, multiple generalizations of normalizing flows have been introduced that relax these two conditions. Standard neural networks, on the other hand, only perform a forward pass on the input: there is neither a notion of the inverse of a neural network nor one of its likelihood contribution. In this paper we argue that certain neural network architectures can be enriched with a stochastic inverse pass and that their likelihood contribution can be monitored in a way that makes them fall under the generalized notion of a normalizing flow mentioned above. We term this enrichment flowification. We prove that neural networks containing only linear layers, convolutional layers, and invertible activations such as LeakyReLU can be flowified, and we evaluate them in the generative setting on image datasets.
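To make the likelihood bookkeeping in the abstract concrete: for an invertible map f, the change-of-variables formula gives log p_X(x) = log p_Z(f(x)) + log|det J_f(x)|, so each layer contributes its log-determinant term as a sample propagates through the network. The sketch below (not the authors' implementation; ALPHA and the function names are hypothetical) shows this for LeakyReLU, the exactly invertible case mentioned in the abstract; the paper's stochastic inverse pass concerns layers that are not bijective, which this sketch does not cover.

```python
# Minimal sketch of per-layer likelihood tracking for an invertible
# activation, under the change-of-variables formula. Not the paper's code.
import numpy as np

ALPHA = 0.1  # hypothetical negative slope for LeakyReLU

def leaky_relu_forward(x, alpha=ALPHA):
    """Forward pass that also returns the layer's likelihood
    contribution log|det J|."""
    y = np.where(x > 0, x, alpha * x)
    # The Jacobian is diagonal with entries 1 (where x > 0) or alpha
    # (where x <= 0), so log|det J| = log(alpha) * #(non-positive entries).
    log_det = np.log(alpha) * np.sum(x <= 0, axis=-1)
    return y, log_det

def leaky_relu_inverse(y, alpha=ALPHA):
    """Exact inverse: LeakyReLU preserves sign, so no stochasticity is
    needed here, unlike the non-invertible layers treated in the paper."""
    return np.where(y > 0, y, y / alpha)

x = np.random.randn(4, 3)
y, log_det = leaky_relu_forward(x)
assert np.allclose(leaky_relu_inverse(y), x)
# log p_X(x) = log p_Z(y) + log_det, accumulated layer by layer.
```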
