Paper Title
Learning Monotone Dynamics by Neural Networks
Paper Authors
Abstract
Feed-forward neural networks (FNNs) serve as standard building blocks in applying artificial intelligence (AI) to the physical world. They allow learning the dynamics of unknown physical systems (e.g., biological and chemical systems) to predict their future behavior. However, without proper treatment, they are likely to violate the physical constraints of those systems. This work focuses on imposing two important physical constraints when using FNNs to learn physical dynamics: monotonicity (i.e., a partial order of system states is preserved over time) and stability (i.e., the system states converge over time). For the monotonicity constraint, we propose to use nonnegative neural networks and batch normalization. For both the monotonicity and stability constraints, we propose to learn the system dynamics and a corresponding Lyapunov function simultaneously. As demonstrated by case studies, our methods preserve the stability and monotonicity of FNNs and significantly reduce their prediction errors.
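To illustrate the nonnegative-network idea from the abstract, here is a minimal sketch (not the paper's implementation, and with hypothetical weight values chosen for illustration): a feed-forward network whose weights are all nonnegative composes nondecreasing affine maps with a nondecreasing activation (ReLU), so the overall map is monotone — if x ≤ y componentwise, then f(x) ≤ f(y) componentwise.

```python
# Minimal sketch of a monotone FNN via nonnegative weights.
# Not the paper's method verbatim; weights below are hypothetical.

def relu(v):
    # ReLU is nondecreasing, so it preserves componentwise order.
    return [max(0.0, u) for u in v]

def layer(x, W, b):
    # With W[i][j] >= 0, each output is nondecreasing in each input.
    return [sum(W[i][j] * x[j] for j in range(len(x))) + b[i]
            for i in range(len(W))]

def monotone_fnn(x, weights, biases):
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(layer(h, W, b))
    return layer(h, weights[-1], biases[-1])

# Hypothetical nonnegative weights for a 2-2-1 network.
W1 = [[0.5, 1.0], [2.0, 0.1]]
b1 = [0.0, -1.0]
W2 = [[1.0, 0.5]]
b2 = [0.2]

x, y = [0.0, 1.0], [1.0, 2.0]   # x <= y componentwise
fx = monotone_fnn(x, [W1, W2], [b1, b2])
fy = monotone_fnn(y, [W1, W2], [b1, b2])
assert all(a <= b for a, b in zip(fx, fy))  # partial order preserved
```

In practice such a constraint is typically enforced during training by reparameterizing weights (e.g., storing them as exponentials or clipping after each update), so that gradient descent can never leave the nonnegative set.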