Paper Title
Competitive Physics Informed Networks
Paper Authors
Paper Abstract
Neural networks can be trained to solve partial differential equations (PDEs) by using the PDE residual as the loss function. This strategy is called "physics-informed neural networks" (PINNs), but it currently cannot produce high-accuracy solutions, typically attaining about $0.1\%$ relative error. We present an adversarial approach that overcomes this limitation, which we call competitive PINNs (CPINNs). CPINNs train a discriminator that is rewarded for predicting mistakes the PINN makes. The discriminator and PINN participate in a zero-sum game with the exact PDE solution as an optimal strategy. This approach avoids squaring the large condition numbers of PDE discretizations, which is the likely reason for failures of previous attempts to decrease PINN errors even on benign problems. Numerical experiments on a Poisson problem show that CPINNs achieve errors four orders of magnitude smaller than the best-performing PINN. We observe relative errors on the order of single-precision accuracy, consistently decreasing with each epoch. To the authors' knowledge, this is the first time this level of accuracy and convergence behavior has been achieved. Additional experiments on the nonlinear Schrödinger, Burgers', and Allen-Cahn equations show that the benefits of CPINNs are not limited to linear problems.
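To make the adversarial setup concrete, the following is a minimal, hypothetical PyTorch sketch of how such a zero-sum game could be trained on a 1D Poisson problem $-u''(x) = f(x)$ with $u(0) = u(1) = 0$. The pointwise pairing of the discriminator output with the PDE residual, the simultaneous descent/ascent updates, and all network sizes, optimizers, and learning rates are illustrative assumptions, not the paper's exact formulation.

# Hypothetical sketch of a CPINN-style zero-sum game on -u'' = f, u(0) = u(1) = 0.
# All architectural and optimizer choices below are assumptions for illustration.
import torch

def mlp():
    return torch.nn.Sequential(
        torch.nn.Linear(1, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1))

pinn, disc = mlp(), mlp()          # solution network and discriminator
opt_p = torch.optim.Adam(pinn.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

f = lambda x: (torch.pi ** 2) * torch.sin(torch.pi * x)  # manufactured source term

for step in range(10_000):
    x = torch.rand(256, 1, requires_grad=True)            # interior collocation points
    u = pinn(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = -d2u - f(x)                                 # PDE residual, zero at the exact solution

    xb = torch.tensor([[0.0], [1.0]])
    boundary = pinn(xb)                                    # boundary mismatch (u = 0 on the boundary)

    # Zero-sum game value: the discriminator is rewarded for predicting where
    # (and with which sign) the PINN errs; the PINN descends on the same value,
    # which drives the residual and boundary mismatch toward zero.
    game = (disc(x) * residual).mean() + (disc(xb) * boundary).mean()

    opt_p.zero_grad(); opt_d.zero_grad()
    game.backward()
    opt_p.step()                                           # PINN minimizes the game value
    for p in disc.parameters():                            # discriminator maximizes it
        p.grad = -p.grad                                   # (ascent via sign-flipped gradients)
    opt_d.step()

Note that the game value in this sketch is linear in the residual rather than quadratic, which is consistent with the abstract's remark that the approach avoids squaring the large condition numbers of PDE discretizations.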