Paper Title
Estimating Vector Fields from Noisy Time Series
Paper Authors
Paper Abstract
While there has been a surge of recent interest in learning differential equation models from time series, methods in this area typically cannot cope with highly noisy data. We break this problem into two parts: (i) approximating the unknown vector field (or right-hand side) of the differential equation, and (ii) dealing with noise. To deal with (i), we describe a neural network architecture consisting of tensor products of one-dimensional neural shape functions. For (ii), we propose an alternating minimization scheme that switches between vector field training and filtering steps, together with multiple trajectories of training data. We find that the neural shape function architecture retains the approximation properties of dense neural networks, enables effective computation of vector field error, and allows for graphical interpretability, all for data/systems in any finite dimension $d$. We also study the combination of either our neural shape function method or existing differential equation learning methods with alternating minimization and multiple trajectories. We find that retrofitting any learning method in this way boosts the method's robustness to noise. While in their raw form the methods struggle with 1% Gaussian noise, after retrofitting, they learn accurate vector fields from data with 10% Gaussian noise.
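The tensor-product construction mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical stand-in, not the paper's method: fixed tanh features play the role of the trained one-dimensional neural shape functions, a Van der Pol oscillator stands in for the unknown vector field, and only the (linear) expansion coefficients are fit, by least squares rather than by neural network training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "shape functions": fixed tanh bumps per coordinate.
# (In the paper these are small trained 1-D networks; fixed features
# suffice to illustrate the tensor-product structure.)
def shape_funcs(u, centers, width=1.0):
    """u: (n,) samples of one coordinate -> (n, m) feature matrix."""
    return np.tanh((u[:, None] - centers[None, :]) / width)

def tensor_features(X, centers):
    """X: (n, 2) -> (n, m*m) tensor products phi_i(x1) * psi_j(x2)."""
    A = shape_funcs(X[:, 0], centers)
    B = shape_funcs(X[:, 1], centers)
    return (A[:, :, None] * B[:, None, :]).reshape(len(X), -1)

def f_true(X, mu=1.0):
    """Stand-in target vector field: Van der Pol oscillator, d = 2."""
    x, y = X[:, 0], X[:, 1]
    return np.stack([y, mu * (1.0 - x**2) * y - x], axis=1)

centers = np.linspace(-3.0, 3.0, 8)
X_train = rng.uniform(-2, 2, size=(2000, 2))
Phi = tensor_features(X_train, centers)  # (2000, 64) feature matrix

# Fit the expansion coefficients of both vector field components at once;
# the fit is linear in the coefficients, so least squares suffices here.
C, *_ = np.linalg.lstsq(Phi, f_true(X_train), rcond=None)

# Held-out error of the tensor-product approximation vs. a zero predictor.
X_test = rng.uniform(-2, 2, size=(500, 2))
fit_mse = np.mean((tensor_features(X_test, centers) @ C - f_true(X_test)) ** 2)
zero_mse = np.mean(f_true(X_test) ** 2)
print(fit_mse, zero_mse)
```

Because each feature is a product of functions of a single coordinate, each learned one-dimensional factor can be plotted directly, which is the source of the graphical interpretability the abstract refers to; the same construction extends to any finite dimension $d$ by taking $d$-fold products.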