Paper Title

Robust representations of oil wells' intervals via sparse attention mechanism

Paper Authors

Alina Ermilova, Nikita Baramiia, Valerii Kornilov, Sergey Petrakov, Alexey Zaytsev

Paper Abstract

Transformer-based neural network architectures achieve state-of-the-art results in different domains, from natural language processing (NLP) to computer vision (CV). The key idea of Transformers, the attention mechanism, has already led to significant breakthroughs in many areas. Attention has found its application to time series data as well. However, due to the quadratic complexity of the attention computation with respect to input sequence length, the application of Transformers is limited by high resource demands. Moreover, their modifications for industrial time series need to be robust to missing or noisy values, which complicates broadening the horizon of their application. To cope with these issues, we introduce a class of efficient Transformers named Regularized Transformers (Reguformers). We implement a regularization technique inspired by the dropout idea to improve robustness and reduce computational expenses. Our experiments focus on oil & gas data, namely well logs, a prominent example of multivariate time series. The goal is to solve the problems of similarity and representation learning for them. To evaluate our models on these problems, we work with an industry-scale open dataset consisting of well logs from more than 20 wells. The experiments show that all variations of Reguformers outperform the previously developed RNNs, the classical Transformer model, and robust modifications of it such as Informer and Performer, in terms of well-interval classification and the quality of the obtained well-interval representations. Moreover, the robustness of our models to missing and incorrect data exceeds that of the others by a significant margin. The best result that the Reguformer achieves on the well-interval similarity task is a mean PR AUC score of 0.983, which is comparable to the classical Transformer and outperforms the previous models.
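To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of dropout-inspired attention regularization: during training, only a random subset of keys and values is kept, which both perturbs the attention pattern (acting as a regularizer) and shrinks the attention matrix (reducing compute). The function name `reguformer_attention_sketch` and the `keep_p` parameter are illustrative assumptions, not the authors' API; the actual Reguformer mechanism is defined in the paper itself.

```python
import torch
import torch.nn.functional as F

def reguformer_attention_sketch(q, k, v, keep_p=0.75, training=True):
    # Hypothetical sketch (not the authors' code): randomly keep a
    # fraction keep_p of the keys/values during training, so the
    # attention matrix shrinks from n x n to n x m, with m ~ keep_p * n.
    if training and keep_p < 1.0:
        n = k.size(-2)
        m = max(1, int(keep_p * n))
        idx = torch.randperm(n)[:m]            # random key/value subset
        k, v = k[..., idx, :], v[..., idx, :]
    # Standard scaled dot-product attention over the (reduced) key set.
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
    return F.softmax(scores, dim=-1) @ v

# Example: 2 well-log windows, 128 depth steps, 8 log features each.
x = torch.randn(2, 128, 8)
out = reguformer_attention_sketch(x, x, x)     # self-attention over a window
print(out.shape)                               # torch.Size([2, 128, 8])
```

Randomly removing attention links at training time follows the same intuition that makes dropout a regularizer; whether Reguformers drop queries, keys, or entries of the attention weight matrix is specified in the paper, which this sketch does not reproduce.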
