Paper Title

TNT: Vision Transformer for Turbulence Simulations

Authors

Yuchen Dang, Zheyuan Hu, Miles Cranmer, Michael Eickenberg, Shirley Ho

Abstract

Turbulence is notoriously difficult to model due to its multi-scale nature and sensitivity to small perturbations. Classical solvers for turbulence simulation generally operate on finer grids and are computationally inefficient. In this paper, we propose the Turbulence Neural Transformer (TNT), a learned simulator based on the transformer architecture, to predict turbulent dynamics on coarsened grids. TNT extends the positional embeddings of vanilla transformers to a spatiotemporal setting to learn representations in the 3D time-series domain, and applies Temporal Mutual Self-Attention (TMSA), which captures adjacent dependencies, to extract deep and dynamic features. TNT is capable of generating comparatively long-range predictions stably and accurately, and we show that TNT outperforms the state-of-the-art U-net simulator on several metrics. We also test the model performance with different components removed and evaluate robustness to different initial conditions. Although more experiments are needed, we conclude that TNT has great potential to outperform existing solvers and generalize to additional simulation datasets.
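The abstract mentions extending vanilla transformer positional embeddings to a spatiotemporal setting over a 3D simulation grid. The paper's exact scheme is not given here, so the following is only a minimal illustrative sketch: it builds standard sinusoidal encodings independently along each of the four axes (time plus three spatial dimensions) and concatenates them per grid point. The function names and the per-axis dimension are hypothetical choices for illustration, not the authors' implementation.

```python
import numpy as np

def sinusoidal_encoding(positions, dim):
    """Standard 1D sinusoidal positional encoding for a vector of integer positions."""
    pos = np.asarray(positions, dtype=np.float64)[:, None]         # (N, 1)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)  # (dim/2,)
    angles = pos * freqs                                           # (N, dim/2)
    enc = np.zeros((pos.shape[0], dim))
    enc[:, 0::2] = np.sin(angles)   # even channels: sine
    enc[:, 1::2] = np.cos(angles)   # odd channels: cosine
    return enc

def spatiotemporal_embedding(T, X, Y, Z, dim_per_axis=16):
    """Concatenate per-axis encodings for every (t, x, y, z) grid point."""
    coords = np.stack(
        np.meshgrid(np.arange(T), np.arange(X), np.arange(Y), np.arange(Z),
                    indexing="ij"),
        axis=-1,
    ).reshape(-1, 4)                                               # (T*X*Y*Z, 4)
    parts = [sinusoidal_encoding(coords[:, i], dim_per_axis) for i in range(4)]
    return np.concatenate(parts, axis=-1)                          # (T*X*Y*Z, 4*dim_per_axis)

emb = spatiotemporal_embedding(T=4, X=8, Y=8, Z=8)
print(emb.shape)  # (2048, 64)
```

Each grid point thus receives a unique embedding that is added to (or concatenated with) its flattened patch features before the attention layers, the same way 1D positional embeddings are used in a vanilla transformer.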
