Paper Title
Reliable extrapolation of deep neural operators informed by physics or sparse observations
Paper Authors
Paper Abstract
Deep neural operators can learn nonlinear mappings between infinite-dimensional function spaces via deep neural networks. As promising surrogate solvers of partial differential equations (PDEs) for real-time prediction, deep neural operators such as deep operator networks (DeepONets) provide a new simulation paradigm in science and engineering. Purely data-driven neural operators, and deep learning models in general, are usually limited to interpolation scenarios, where new predictions utilize inputs within the support of the training set. However, in the inference stage of real-world applications, the input may lie outside the support, i.e., extrapolation is required, which may result in large errors and unavoidable failure of deep learning models. Here, we address this challenge of extrapolation for deep neural operators. First, we systematically investigate the extrapolation behavior of DeepONets by quantifying the extrapolation complexity via the 2-Wasserstein distance between two function spaces, and we propose a new behavior of bias-variance trade-off for extrapolation with respect to model capacity. Subsequently, we develop a complete workflow, including extrapolation determination, and we propose five reliable learning methods that guarantee safe predictions under extrapolation by requiring additional information -- the governing PDEs of the system or sparse new observations. The proposed methods are based on either fine-tuning a pre-trained DeepONet or multifidelity learning. We demonstrate the effectiveness of the proposed framework for various types of parametric PDEs. Our systematic comparisons provide practical guidelines for selecting a proper extrapolation method depending on the available information, desired accuracy, and required inference speed.
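To make the notion of extrapolation complexity concrete, the sketch below computes the 2-Wasserstein distance between two function spaces, modeled here (as an assumption, not taken from the paper's setup) as zero-mean Gaussian random fields with squared-exponential covariances of different correlation lengths, discretized on a grid. For Gaussians, W2 has the closed form W2^2 = Tr(C1) + Tr(C2) - 2 Tr((C2^{1/2} C1 C2^{1/2})^{1/2}); the function names and length-scale values are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

def rbf_cov(x, length_scale):
    """Squared-exponential covariance matrix of a zero-mean GRF on grid x."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def w2_gaussian(C1, C2):
    """2-Wasserstein distance between N(0, C1) and N(0, C2) (closed form)."""
    s2 = sqrtm(C2)
    cross = sqrtm(s2 @ C1 @ s2)
    val = np.trace(C1) + np.trace(C2) - 2.0 * np.trace(cross)
    # sqrtm can leave tiny imaginary/negative residue on near-singular kernels
    return np.sqrt(max(np.real(val), 0.0))

x = np.linspace(0.0, 1.0, 64)
C_train = rbf_cov(x, 0.5)  # training space: smooth input functions
C_test = rbf_cov(x, 0.1)   # test space: rougher functions -> extrapolation
print(w2_gaussian(C_train, C_test))  # larger distance = harder extrapolation
```

A larger distance between the training and test input spaces indicates a harder extrapolation task, which is the quantity the bias-variance analysis in the abstract is organized around.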