Paper Title

A Transformer-based Framework for Multivariate Time Series Representation Learning

Paper Authors

George Zerveas, Srideepika Jayaraman, Dhaval Patel, Anuradha Bhamidipaty, Carsten Eickhoff

Paper Abstract

In this work we propose for the first time a transformer-based framework for unsupervised representation learning of multivariate time series. Pre-trained models can be potentially used for downstream tasks such as regression and classification, forecasting and missing value imputation. By evaluating our models on several benchmark datasets for multivariate time series regression and classification, we show that not only does our modeling approach represent the most successful method employing unsupervised learning of multivariate time series presented to date, but also that it exceeds the current state-of-the-art performance of supervised methods; it does so even when the number of training samples is very limited, while offering computational efficiency. Finally, we demonstrate that unsupervised pre-training of our transformer models offers a substantial performance benefit over fully supervised learning, even without leveraging additional unlabeled data, i.e., by reusing the same data samples through the unsupervised objective.
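
The abstract describes unsupervised pre-training in which the model reconstructs masked portions of the input series and is then reused for downstream regression or classification. Below is a minimal, hypothetical sketch of that idea in PyTorch: a small transformer encoder with a masked-value reconstruction (MSE) objective. All names, hyperparameters, and the simple Bernoulli masking scheme here are illustrative assumptions, not the authors' implementation (the paper masks contiguous per-variable segments, among other details).

```python
import torch
import torch.nn as nn

class TSTransformerEncoder(nn.Module):
    """Minimal transformer encoder for multivariate time series (illustrative sketch)."""
    def __init__(self, n_vars, d_model=64, n_heads=4, n_layers=2, max_len=512):
        super().__init__()
        self.input_proj = nn.Linear(n_vars, d_model)  # project variables to model dim
        # Learnable positional embeddings (assumed; the paper also discusses alternatives)
        self.pos_emb = nn.Parameter(torch.randn(max_len, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.output_proj = nn.Linear(d_model, n_vars)  # reconstruct input values

    def forward(self, x):
        # x: (batch, seq_len, n_vars)
        h = self.input_proj(x) + self.pos_emb[: x.size(1)]
        h = self.encoder(h)
        return self.output_proj(h)

def masked_mse_pretrain_step(model, x, mask_ratio=0.15):
    """One unsupervised pre-training step: hide random entries, reconstruct them.

    Simplified Bernoulli masking; the paper uses contiguous masked segments.
    """
    mask = torch.rand_like(x) < mask_ratio   # True where input is hidden
    x_masked = x.masked_fill(mask, 0.0)
    pred = model(x_masked)
    # MSE computed only on the masked positions, denoising-style
    loss = ((pred - x) ** 2)[mask].mean()
    return loss

# Usage: pre-train on (possibly unlabeled) series, then fine-tune the encoder
# for regression/classification. Shapes below are arbitrary examples.
model = TSTransformerEncoder(n_vars=8)
x = torch.randn(32, 100, 8)                  # (batch, time, variables)
loss = masked_mse_pretrain_step(model, x)
loss.backward()
```

Note that, as the abstract emphasizes, the same labeled training samples can be reused for this unsupervised objective before supervised fine-tuning; no additional unlabeled data is required to obtain the reported benefit.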
