Paper Title

The Kolmogorov-Arnold representation theorem revisited

Authors

Schmidt-Hieber, Johannes

Abstract

There is a longstanding debate whether the Kolmogorov-Arnold representation theorem can explain the use of more than one hidden layer in neural networks. The Kolmogorov-Arnold representation decomposes a multivariate function into an interior and an outer function and therefore indeed has a structure similar to that of a neural network with two hidden layers. But there are distinctive differences. One of the main obstacles is that the outer function depends on the represented function and can be wildly varying even if the represented function is smooth. We derive modifications of the Kolmogorov-Arnold representation that transfer smoothness properties of the represented function to the outer function and can be well approximated by ReLU networks. It appears that instead of two hidden layers, a more natural interpretation of the Kolmogorov-Arnold representation is that of a deep neural network where most of the layers are required to approximate the interior function.
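For reference, the classical Kolmogorov-Arnold representation theorem discussed in the abstract can be stated as follows (standard formulation; the symbols for the inner functions $\psi_{q,p}$ and the outer functions $\Phi_q$ are a common notational choice, not the paper's own):

```latex
% Kolmogorov-Arnold representation theorem (classical form):
% every continuous function f on the d-dimensional unit cube [0,1]^d
% admits the decomposition
\[
  f(x_1,\dots,x_d)
  \;=\;
  \sum_{q=0}^{2d} \Phi_q\!\Bigl(\sum_{p=1}^{d} \psi_{q,p}(x_p)\Bigr),
\]
% with continuous univariate inner functions \psi_{q,p} and
% continuous univariate outer functions \Phi_q.
% The inner sum plays the role of the first hidden layer and the
% outer functions \Phi_q that of the second, which is the analogy
% to two-hidden-layer networks the abstract refers to.
```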
