Paper Title

MTSMAE: Masked Autoencoders for Multivariate Time-Series Forecasting

Paper Authors

Peiwang Tang, Xianchao Zhang

Paper Abstract

Large-scale self-supervised pre-training of Transformer architectures has significantly boosted performance on various tasks in natural language processing (NLP) and computer vision (CV). However, there is a lack of research on processing multivariate time series with pre-trained Transformers, and in particular, masking time series for self-supervised learning remains a gap in current studies. Unlike language and image processing, the information density of time series increases the difficulty of the research. The challenge is compounded by the invalidity of previous patch embedding and masking methods. In this paper, we propose a patch embedding method based on the data characteristics of multivariate time series, and we present a self-supervised pre-training approach based on Masked Autoencoders (MAE), called MTSMAE, which can significantly improve performance over supervised learning without pre-training. We evaluate our method on several common multivariate time-series datasets from different fields and with different characteristics; the experimental results demonstrate that our method performs significantly better than the best methods currently available.
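To make the described pipeline concrete, below is a minimal PyTorch sketch of what MAE-style pre-training on a multivariate time series can look like: patches are cut along the time axis and linearly projected, a random subset of patch tokens is masked out, the encoder sees only the visible tokens, and a lightweight decoder reconstructs all patches. The module names, dimensions, 75% masking ratio, and the omission of positional encodings are illustrative assumptions, not the exact MTSMAE architecture from the paper.

```python
# Minimal sketch of MAE-style pre-training for multivariate time series.
# All module names, dimensions, and the masking ratio are illustrative
# assumptions; positional encodings are omitted for brevity.
import torch
import torch.nn as nn


class PatchEmbed(nn.Module):
    """Split a series (B, L, C) into non-overlapping patches along the
    time axis and project each patch to the model dimension."""

    def __init__(self, patch_len: int, n_channels: int, d_model: int):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(patch_len * n_channels, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, l, c = x.shape
        n = l // self.patch_len
        x = x[:, : n * self.patch_len].reshape(b, n, self.patch_len * c)
        return self.proj(x)  # (B, N, d_model)


def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random subset of patch tokens; return the kept tokens and
    the indices needed to restore the original patch order."""
    b, n, d = tokens.shape
    n_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n, device=tokens.device)
    ids_shuffle = noise.argsort(dim=1)
    ids_restore = ids_shuffle.argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    kept = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    return kept, ids_restore, n_keep


class TinyMAE(nn.Module):
    """Encoder sees only visible patches; a light decoder inserts mask
    tokens and reconstructs every patch."""

    def __init__(self, patch_len=12, n_channels=7, d_model=64, n_heads=4):
        super().__init__()
        self.embed = PatchEmbed(patch_len, n_channels, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        dec_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=1)
        self.head = nn.Linear(d_model, patch_len * n_channels)

    def forward(self, x: torch.Tensor, mask_ratio: float = 0.75):
        patches = self.embed(x)                                   # (B, N, D)
        kept, ids_restore, n_keep = random_masking(patches, mask_ratio)
        latent = self.encoder(kept)
        b, n, d = patches.shape
        mask_tokens = self.mask_token.expand(b, n - n_keep, d)
        full = torch.cat([latent, mask_tokens], dim=1)
        # Undo the shuffle so tokens line up with their original positions.
        full = torch.gather(full, 1, ids_restore.unsqueeze(-1).expand(-1, -1, d))
        return self.head(self.decoder(full))                      # (B, N, patch_len * C)


# Usage: reconstruct patches of a toy 7-variable series of length 96.
series = torch.randn(8, 96, 7)           # (batch, length, channels)
model = TinyMAE()
recon = model(series)                    # (8, 8, 12 * 7)
target = series.reshape(8, 8, 12 * 7)    # ground-truth patches
# Full MAE computes the loss on masked patches only; all patches are
# used here for brevity.
loss = nn.functional.mse_loss(recon, target)
```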
