Title

Enhanced Physics-Informed Neural Networks with Augmented Lagrangian Relaxation Method (AL-PINNs)

Authors

Hwijae Son, Sung Woong Cho, Hyung Ju Hwang

Abstract

Physics-Informed Neural Networks (PINNs) have become a prominent application of deep learning in scientific computation, as they are powerful approximators of solutions to nonlinear partial differential equations (PDEs). There have been numerous attempts to facilitate the training process of PINNs by adjusting the weight of each component of the loss function, called adaptive loss-balancing algorithms. In this paper, we propose an Augmented Lagrangian relaxation method for PINNs (AL-PINNs). We treat the initial and boundary conditions as constraints for the optimization problem of the PDE residual. By employing Augmented Lagrangian relaxation, the constrained optimization problem becomes a sequential max-min problem so that the learnable parameters $λ$ adaptively balance each loss component. Our theoretical analysis reveals that the sequence of minimizers of the proposed loss functions converges to an actual solution for the Helmholtz, viscous Burgers, and Klein--Gordon equations. We demonstrate through various numerical experiments that AL-PINNs yield a much smaller relative error compared with that of state-of-the-art adaptive loss-balancing algorithms.
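The abstract's core idea, treating initial/boundary conditions as constraints and solving a sequential max-min problem in which multipliers λ adapt to each loss component, can be illustrated with a minimal augmented Lagrangian sketch. The scalar problem below (minimize f(x) = x² subject to x − 1 = 0) is a stand-in of my own choosing, not from the paper: f plays the role of the PDE residual loss and the constraint plays the role of a boundary condition; the function names, the penalty value `mu`, and step counts are all illustrative assumptions.

```python
# Hypothetical toy sketch of augmented Lagrangian relaxation (the scheme
# AL-PINNs applies to network weights): minimize f(x) = x^2 subject to
# g(x) = x - 1 = 0. Inner loop: gradient descent on the primal variable
# (analogous to training the PINN). Outer loop: dual ascent, so the
# multiplier adapts to the remaining constraint violation.

def augmented_lagrangian(f_grad, g, g_grad, x0, mu=10.0, outer=50, inner=200, lr=0.05):
    x, lam = x0, 0.0
    for _ in range(outer):
        # Inner minimization of L(x) = f(x) + lam*g(x) + (mu/2)*g(x)^2.
        for _ in range(inner):
            grad = f_grad(x) + lam * g_grad(x) + mu * g(x) * g_grad(x)
            x -= lr * grad
        # Dual ascent: lambda grows with the constraint violation,
        # which is how the multiplier "balances" the loss components.
        lam += mu * g(x)
    return x, lam

x_star, lam_star = augmented_lagrangian(
    f_grad=lambda x: 2.0 * x,   # gradient of f(x) = x^2
    g=lambda x: x - 1.0,        # constraint g(x) = x - 1
    g_grad=lambda x: 1.0,
    x0=0.0,
)
print(x_star, lam_star)  # x approaches 1, lambda approaches -2
```

At the solution, stationarity gives 2x + λ = 0 with x = 1, so λ → −2: the multiplier is learned rather than hand-tuned, which is the advantage the abstract claims over fixed-weight loss balancing.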
