Paper title
Learning stochastic dynamics and predicting emergent behavior using transformers
Paper authors
Abstract
We show that a neural network originally designed for language processing can learn the dynamical rules of a stochastic system by observation of a single dynamical trajectory of the system, and can accurately predict its emergent behavior under conditions not observed during training. We consider a lattice model of active matter undergoing continuous-time Monte Carlo dynamics, simulated at a density at which its steady state comprises small, dispersed clusters. We train a neural network called a transformer on a single trajectory of the model. The transformer, which we show has the capacity to represent dynamical rules that are numerous and nonlocal, learns that the dynamics of this model consists of a small number of processes. Forward-propagated trajectories of the trained transformer, at densities not encountered during training, exhibit motility-induced phase separation and so predict the existence of a nonequilibrium phase transition. Transformers have the flexibility to learn dynamical rules from observation without explicit enumeration of rates or coarse-graining of configuration space, and so the procedure used here can be applied to a wide range of physical systems, including those with large and complex dynamical generators.
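The abstract refers to continuous-time Monte Carlo dynamics of a lattice model of active matter. As a rough illustration of what such a simulation looks like, the sketch below runs kinetic (Gillespie-style) Monte Carlo for oriented particles on a periodic square lattice: each particle can hop one site in its orientation direction (rate `V`, blocked by exclusion) or re-randomize its orientation (rate `D`). The specific moves, rates, and parameters here are assumptions chosen for illustration, not the model or rates used in the paper.

```python
import math
import random

# Lattice directions for the 4 possible particle orientations.
DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def kmc_trajectory(L=10, n_particles=20, n_steps=100, V=1.0, D=0.1, seed=0):
    """Illustrative kinetic Monte Carlo on an L x L periodic lattice.

    Each particle carries an orientation and can either hop one site in
    that direction (rate V, subject to exclusion) or rotate to a random
    new orientation (rate D). Returns final positions, orientations, and
    the elapsed continuous time.
    """
    rng = random.Random(seed)
    sites = rng.sample(range(L * L), n_particles)
    pos = [(s % L, s // L) for s in sites]          # particle positions
    ori = [rng.randrange(4) for _ in range(n_particles)]
    occupied = set(pos)
    t = 0.0
    for _ in range(n_steps):
        # Enumerate the events currently possible and their rates.
        events = []
        for i, (x, y) in enumerate(pos):
            dx, dy = DIRS[ori[i]]
            target = ((x + dx) % L, (y + dy) % L)
            if target not in occupied:              # exclusion: one particle per site
                events.append(("hop", i, target, V))
            events.append(("rot", i, None, D))
        total = sum(e[3] for e in events)
        # Gillespie step: exponential waiting time, event chosen
        # with probability proportional to its rate.
        t += -math.log(1.0 - rng.random()) / total
        r = rng.random() * total
        for kind, i, target, rate in events:
            r -= rate
            if r <= 0:
                break
        if kind == "hop":
            occupied.remove(pos[i])
            pos[i] = target
            occupied.add(target)
        else:
            ori[i] = rng.randrange(4)
    return pos, ori, t

positions, orientations, elapsed = kmc_trajectory()
print(len(positions), elapsed > 0)
```

A trajectory produced this way (a time-ordered sequence of configurations and moves) is the kind of observational data the paper's transformer is trained on.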