Paper Title
Residual-Quantile Adjustment for Adaptive Training of Physics-informed Neural Network
Paper Authors
Paper Abstract
Adaptive training methods for physics-informed neural networks (PINNs) require dedicated constructions of the distribution of weights assigned to each training sample. Efficiently seeking such an optimal weight distribution is not a simple task, and most existing methods choose the adaptive weights based on approximations of the full residual distribution or of the maximum residual. In this paper, we show that the bottleneck in the adaptive choice of samples for training efficiency is the behavior of the tail of the residual distribution. We therefore propose the Residual-Quantile Adjustment (RQA) method to provide a better weight choice for each training sample. After initially setting the weights proportional to the $p$-th power of the residual, our RQA method reassigns all weights above the $q$-quantile (e.g., $90\%$) to the median value, so that the weights follow a quantile-adjusted distribution derived from the residuals. Moreover, this iterative reweighting technique is very easy to implement. Experimental results show that the proposed method outperforms several adaptive methods on various partial differential equation (PDE) problems.
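The abstract describes RQA as a simple reweighting rule applied to the per-sample residuals. The sketch below is a minimal illustration of one possible reading of that rule in NumPy; the function name `rqa_weights`, the defaults `p=2` and `q=0.9`, and the final normalization to a probability distribution are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def rqa_weights(residuals, p=2, q=0.9):
    """Quantile-adjusted weights from PDE residuals (illustrative sketch).

    Weights start proportional to the p-th power of the absolute residual;
    every weight above the q-quantile is then reset to the median weight,
    and the result is normalized to sum to one (normalization is an assumption).
    """
    w = np.abs(residuals) ** p                  # initial weights ~ |residual|^p
    cutoff = np.quantile(w, q)                  # q-quantile of the current weights
    w = np.where(w > cutoff, np.median(w), w)   # clip the heavy tail to the median
    return w / w.sum()                          # normalize into a distribution

# Example: reweight 1,000 collocation points before the next training epoch.
residuals = np.random.randn(1000)
weights = rqa_weights(residuals, p=2, q=0.9)
```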