Paper Title

Convergence analysis of a quasi-Monte Carlo-based deep learning algorithm for solving partial differential equations

Authors

Fu, Fengjiang; Wang, Xiaoqun

Abstract

Deep learning methods have achieved great success in solving partial differential equations (PDEs), where the loss is often defined as an integral. The accuracy and efficiency of these algorithms depend greatly on the quadrature method. We propose to apply quasi-Monte Carlo (QMC) methods to the Deep Ritz Method (DRM) for solving the Neumann problems for the Poisson equation and the static Schrödinger equation. For error estimation, we decompose the error of using the deep learning algorithm to solve PDEs into the generalization error, the approximation error and the training error. We establish the upper bounds and prove that QMC-based DRM achieves an asymptotically smaller error bound than DRM. Numerical experiments show that the proposed method converges faster in all cases, and that the variances of the gradient estimators of randomized QMC-based DRM are much smaller than those of DRM, which illustrates the superiority of QMC over MC in deep learning.
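To make the quadrature step concrete, the sketch below contrasts a plain Monte Carlo estimate of a Deep Ritz energy with a randomized quasi-Monte Carlo (scrambled Sobol') estimate. It is only an illustration of the idea, not the authors' implementation: the small network, the source term f, the energy functional ∫_Ω (½|∇u|² − f·u) dx, and the domain [0,1]² are assumptions made for the demonstration; the paper's functionals for the Neumann and static Schrödinger problems contain additional boundary and potential terms.

```python
# Minimal sketch (not the authors' code): estimate a Deep Ritz energy on [0,1]^2
# with randomized QMC (scrambled Sobol') points versus plain Monte Carlo points.
# Network architecture, source term f, and the energy functional are illustrative
# assumptions; only the sampling of quadrature nodes differs between the two cases.
import torch
from scipy.stats import qmc

torch.manual_seed(0)
d = 2
net = torch.nn.Sequential(
    torch.nn.Linear(d, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def f(x):
    # Hypothetical source term, chosen only for demonstration.
    return torch.sin(x).sum(dim=1, keepdim=True)

def ritz_energy(points):
    """Quadrature estimate of E(u) = \\int_{[0,1]^d} 0.5*|grad u|^2 - f*u dx."""
    x = points.clone().requires_grad_(True)
    u = net(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    integrand = 0.5 * (grad_u ** 2).sum(dim=1, keepdim=True) - f(x) * u
    return integrand.mean()  # equal-weight rule; the volume of [0,1]^d is 1

n = 2 ** 10
# Randomized QMC nodes: scrambled Sobol' sequence on [0,1)^d.
qmc_pts = torch.tensor(qmc.Sobol(d=d, scramble=True).random(n), dtype=torch.float32)
# Plain Monte Carlo nodes for comparison.
mc_pts = torch.rand(n, d)

loss_qmc = ritz_energy(qmc_pts)
loss_mc = ritz_energy(mc_pts)
loss_qmc.backward()  # gradients for one optimizer step on the QMC-based loss
print(float(loss_qmc), float(loss_mc))
```

Under these assumptions, switching from MC to QMC changes only the line that draws the quadrature nodes; the network, loss, and training loop are untouched, which is why the QMC-based variant can be viewed as a drop-in modification of DRM.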
