Paper Title

On Sharpness of Error Bounds for Multivariate Neural Network Approximation

Paper Author

Goebbels, Steffen

Paper Abstract

Single hidden layer feedforward neural networks can represent multivariate functions that are sums of ridge functions. These ridge functions are defined via an activation function and customizable weights. The paper deals with best non-linear approximation by such sums of ridge functions. Error bounds are presented in terms of moduli of smoothness. The main focus, however, is to prove that the bounds are best possible. To this end, counterexamples are constructed with a non-linear, quantitative extension of the uniform boundedness principle. They show sharpness with respect to Lipschitz classes for the logistic activation function and for certain piecewise polynomial activation functions. The paper is based on univariate results in (Goebbels, St.: On sharpness of error bounds for univariate approximation by single hidden layer feedforward neural networks. Results Math 75 (3), 2020, article 109, https://rdcu.be/b5mKH).
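
As a rough illustration of the network form described in the abstract (not code from the paper), the sketch below evaluates a single hidden layer feedforward network as a sum of ridge functions c_k * sigma(w_k . x + b_k) with the logistic activation. All names (`shallow_network`, `logistic`) and the randomly chosen weights are illustrative assumptions.

```python
import numpy as np

def logistic(t):
    """Logistic (sigmoid) activation function."""
    return 1.0 / (1.0 + np.exp(-t))

def shallow_network(x, weights, biases, coefficients, activation=logistic):
    """Evaluate f(x) = sum_k c_k * activation(w_k . x + b_k),
    i.e. a sum of ridge functions with customizable weights.

    x            : input vector of dimension d
    weights      : shape (n, d), one direction w_k per hidden neuron
    biases       : shape (n,)
    coefficients : shape (n,), output-layer weights c_k
    """
    ridge_args = weights @ x + biases          # inner products w_k . x + b_k
    return coefficients @ activation(ridge_args)

# Illustrative usage: n = 3 hidden neurons, a function of d = 2 variables.
rng = np.random.default_rng(0)
w = rng.standard_normal((3, 2))
b = rng.standard_normal(3)
c = rng.standard_normal(3)
print(shallow_network(np.array([0.5, -0.25]), w, b, c))
```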
