Paper Title

Error bounds for PDE-regularized learning

Paper Authors

Carsten Gräser, Prem Anand Alathur Srinivasan

Paper Abstract


In this work we consider the regularization of a supervised learning problem by partial differential equations (PDEs) and derive error bounds for the obtained approximation in terms of a PDE error term and a data error term. Assuming that the target function satisfies an unknown PDE, the PDE error term quantifies how well this PDE is approximated by the auxiliary PDE used for regularization. It is shown that this error term decreases as more data is provided. The data error term quantifies the accuracy of the given data. Furthermore, the PDE-regularized learning problem is discretized by generalized Galerkin discretizations, which solve the associated minimization problem over subsets of the infinite-dimensional function space that are not necessarily subspaces. For such discretizations, an error bound in terms of the PDE error, the data error, and a best approximation error is derived.
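The setting described above can be made concrete with a minimal one-dimensional sketch (not the paper's construction; the grid, sample count, auxiliary PDE `-u'' = 0`, and weight `lam` are all illustrative assumptions): noisy samples of a target function are fit by minimizing a data-misfit term plus the squared residual of a finite-difference discretization of the auxiliary PDE.

```python
import numpy as np

# Illustrative PDE-regularized learning in 1D (assumptions, not the paper's setup):
# fit u on a uniform grid over [0, 1] to noisy samples of sin(pi*x), using the
# residual of the auxiliary PDE -u'' = 0 (discrete Laplacian L) as regularizer.

n = 101                                    # grid points
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Noisy point samples of the target function (the "data error" source).
rng = np.random.default_rng(0)
idx = rng.choice(n, size=15, replace=False)
y = np.sin(np.pi * x[idx]) + 0.01 * rng.standard_normal(15)

# Sampling operator S picks out the observed grid points.
S = np.zeros((len(idx), n))
S[np.arange(len(idx)), idx] = 1.0

# Discrete second-difference operator (Dirichlet-type Laplacian).
L = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

# Minimize ||S u - y||^2 + lam * ||L u||^2 via the normal equations.
lam = 1e-8                                 # PDE regularization weight (assumed)
u = np.linalg.solve(S.T @ S + lam * (L.T @ L), S.T @ y)

print(np.max(np.abs(S @ u - y)))           # misfit at the sampled points
```

Because the target `sin(pi*x)` only approximately satisfies the auxiliary PDE `-u'' = 0`, the residual `L u` plays the role of the PDE error term in the abstract, while the noise on `y` plays the role of the data error term.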
