Paper Title
Local Extreme Learning Machines and Domain Decomposition for Solving Linear and Nonlinear Partial Differential Equations
Paper Authors
Abstract
We present a neural network-based method for solving linear and nonlinear partial differential equations by combining the ideas of extreme learning machines (ELM), domain decomposition, and local neural networks. The field solution on each sub-domain is represented by a local feed-forward neural network, and $C^k$ continuity is imposed on the sub-domain boundaries. Each local neural network consists of a small number of hidden layers, while its last hidden layer can be wide. The weight/bias coefficients in all hidden layers of the local neural networks are pre-set to random values and are fixed; only the weight coefficients in the output layers are training parameters. The overall neural network is trained by a linear or nonlinear least squares computation, not by back-propagation-type algorithms. We introduce a block time-marching scheme together with the presented method for long-time dynamic simulations. The current method exhibits a clear sense of convergence with respect to the degrees of freedom in the neural network: its numerical errors typically decrease exponentially or nearly exponentially as the number of degrees of freedom increases. Extensive numerical experiments have been performed to demonstrate the computational performance of the presented method. We compare the current method with the deep Galerkin method (DGM) and the physics-informed neural network (PINN) in terms of accuracy and computational cost. The current method exhibits a clear superiority, with its numerical errors and network training time considerably smaller (typically by orders of magnitude) than those of DGM and PINN. We also compare the current method with the classical finite element method (FEM). The computational performance of the current method is on par with, and oftentimes exceeds, the FEM performance.
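To make the core idea concrete, the sketch below illustrates the ELM-style training described in the abstract on a toy problem: a single random-feature network (one hidden layer with fixed random weights/biases) solving the 1D Poisson equation $u''(x) = f(x)$ on $[0,1]$ with homogeneous Dirichlet boundary conditions, where only the output-layer coefficients are found by a linear least squares solve. This is a minimal single-domain assumption-laden sketch, not the paper's full locELM method (no domain decomposition, no $C^k$ continuity conditions, no block time-marching); all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden layer: weights/biases pre-set to random values and kept fixed (ELM idea).
M = 200                                # width of the (last) hidden layer
W = rng.uniform(-5.0, 5.0, M)          # fixed random weights
b = rng.uniform(-5.0, 5.0, M)          # fixed random biases

# Collocation points where the PDE residual is enforced.
x = np.linspace(0.0, 1.0, 100)[:, None]

# Hidden-layer features and their analytic second derivative:
# d^2/dx^2 tanh(Wx + b) = W^2 * (-2 * s * (1 - s^2)) with s = tanh(Wx + b).
S = np.tanh(W * x + b)                          # (100, M)
S_xx = W**2 * (-2.0 * S * (1.0 - S**2))

# Manufactured solution u(x) = sin(pi x), so f(x) = -pi^2 sin(pi x).
f = -np.pi**2 * np.sin(np.pi * x)

# Boundary rows enforce u(0) = u(1) = 0.
Sb = np.tanh(W * np.array([[0.0], [1.0]]) + b)

# Stack PDE residual rows and boundary rows into one linear system
# and solve for the output-layer coefficients by linear least squares.
A = np.vstack([S_xx, Sb])
rhs = np.vstack([f, np.zeros((2, 1))])
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Evaluate the trained network and measure the error against the exact solution.
xt = np.linspace(0.0, 1.0, 500)[:, None]
u = np.tanh(W * xt + b) @ beta
err = np.max(np.abs(u - np.sin(np.pi * xt)))
```

For a nonlinear PDE the single `lstsq` call would be replaced by a nonlinear least squares iteration (e.g. Gauss-Newton), and in the paper's full method each sub-domain carries its own such network, with continuity conditions contributing extra rows to the system.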