Paper Title


Local Randomized Neural Networks with Discontinuous Galerkin Methods for Partial Differential Equations

Authors

Jingbo Sun, Suchuan Dong, Fei Wang

Abstract


Randomized neural networks (RNNs) are a variation of neural networks in which the hidden-layer parameters are fixed to randomly assigned values and the output-layer parameters are obtained by solving a linear system via least squares. This improves the efficiency without degrading the accuracy of the neural network. In this paper, we combine the idea of local RNNs (LRNNs) with the discontinuous Galerkin (DG) approach for solving partial differential equations. RNNs are used to approximate the solution on the subdomains, and the DG formulation is used to glue them together. Taking the Poisson problem as a model, we propose three numerical schemes and provide convergence analyses. Then we extend the ideas to time-dependent problems. Taking the heat equation as a model, three space-time LRNN schemes with DG formulations are proposed. Finally, we present numerical tests to demonstrate the performance of the methods developed herein. We compare the proposed methods with the finite element method and the usual DG method. The LRNN-DG methods can achieve better accuracy under the same degrees of freedom, signifying that this new approach has great potential for solving partial differential equations.
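The core mechanism the abstract describes (fixed random hidden-layer parameters, output-layer weights from a linear least-squares solve) can be illustrated with a minimal sketch. This is a hypothetical one-dimensional function-fitting example, not the paper's LRNN-DG scheme; the network width, weight ranges, and tanh activation are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a randomized neural network (illustrative setup, not the
# paper's LRNN-DG method): hidden-layer weights and biases are assigned
# randomly and then frozen; only the output-layer weights are computed,
# by solving a linear least-squares problem.
rng = np.random.default_rng(0)

# Target function to approximate on [0, 1]
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = np.sin(2.0 * np.pi * x)

# Randomly assigned (then fixed) hidden-layer parameters
n_hidden = 50
W = rng.uniform(-5.0, 5.0, size=(1, n_hidden))
b = rng.uniform(-5.0, 5.0, size=n_hidden)

# Hidden-layer feature matrix with tanh activation
H = np.tanh(x @ W + b)

# Output-layer weights: least-squares solution of the linear system H c = y
c, *_ = np.linalg.lstsq(H, y, rcond=None)

# Maximum pointwise error of the fitted network
err = np.max(np.abs(H @ c - y))
print(f"max error: {err:.2e}")
```

Because only the last layer is trained, the whole "training" step reduces to one linear solve, which is the efficiency gain the abstract refers to; in the LRNN-DG setting, one such network is used per subdomain and the DG terms couple them.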
