Paper title
Physics-integrated machine learning: embedding a neural network in the Navier-Stokes equations. Part II
Paper authors
Paper abstract
The work is a continuation of the paper by Iskhakov A.S. and Dinh N.T., "Physics-integrated machine learning: embedding a neural network in the Navier-Stokes equations. Part I" // arXiv:2008.10509 (2020) [1]. The physics-integrated (or PDE-integrated, where PDE stands for partial differential equation) machine learning (ML) framework proposed in [1] is investigated further. The Navier-Stokes equations are solved using the TensorFlow ML library for the Python programming language via Chorin's projection method. The TensorFlow solution is integrated with a deep feedforward neural network (DFNN). Such integration allows one to train a DFNN embedded in the Navier-Stokes equations without target (labeled training) data for the direct outputs of the DFNN; instead, the DFNN is trained on the field variables (quantities of interest), which are the solutions of the Navier-Stokes equations (the velocity and pressure fields). To demonstrate the performance of the framework, two additional case studies are formulated: 2D turbulent lid-driven cavity flows in which a DFNN predicts (a) the turbulent viscosity and (b) the derivatives of the Reynolds stresses. Despite its complexity and computational cost, the proposed physics-integrated ML shows the potential to develop "PDE-integrated" closure relations for turbulence models and offers principal advantages, namely: (i) the target outputs (labeled training data) for a DFNN may be unknown and can be recovered using the knowledge base (the PDEs); (ii) it is not necessary to extract and preprocess information (training targets) from big data; instead, it can be extracted by the PDEs; (iii) there is no need to employ physics- or scale-separation assumptions to build a closure model for the PDEs. Advantage (i) was demonstrated in the Part I paper [1], while advantage (ii) is the subject of the current paper.
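To make the core idea concrete, here is a minimal illustrative sketch (not the paper's code) of training a closure term embedded in a PDE solver by matching field variables only. In place of the paper's DFNN and the Navier-Stokes equations, a single scalar "viscosity" parameter and a 1D heat equation are used; the loss is defined on the solved field, never on the closure parameter itself, so the parameter's target value is recovered through the PDE. A finite-difference gradient stands in for TensorFlow's automatic differentiation; all names and numerical settings are assumptions made for this sketch.

```python
import numpy as np

def solve_heat(nu, n=64, steps=200, dt=1e-4):
    """Explicit finite-difference solve of u_t = nu * u_xx on [0, 1]
    with zero Dirichlet boundaries and u(x, 0) = sin(pi * x)."""
    x = np.linspace(0.0, 1.0, n)
    u = np.sin(np.pi * x)
    dx = x[1] - x[0]
    for _ in range(steps):
        # Interior update; boundary values stay fixed at (near) zero.
        u[1:-1] += dt * nu * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return u

nu_true = 0.5
u_target = solve_heat(nu_true)   # "labeled" field data (velocity-field analogue)

nu = 0.1                         # untrained closure parameter (DFNN analogue)
lr, eps = 2.0, 1e-4
for _ in range(200):
    # The loss compares field variables, i.e. solutions of the PDE.
    loss = np.mean((solve_heat(nu) - u_target) ** 2)
    # Finite-difference gradient through the solver (autodiff stand-in).
    grad = (np.mean((solve_heat(nu + eps) - u_target) ** 2) - loss) / eps
    nu -= lr * grad

print(f"recovered nu = {nu:.3f} (true {nu_true})")
```

No labeled data for the closure parameter is ever supplied: the true value is recovered solely because the solver output depends on it, which is the training principle the abstract describes for the embedded DFNN.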