Paper Title

Performance Modeling and Vertical Autoscaling of Stream Joins

Authors

Hannaneh Najdataei, Vincenzo Gulisano, Alessandro V. Papadopoulos, Ivan Walulya, Marina Papatriantafilou, Philippas Tsigas

Abstract


Streaming analysis is widely used in cloud as well as edge infrastructures. In these contexts, fine-grained application performance management can rely on accurate modeling of streaming operators. This is especially beneficial for computationally expensive operators like adaptive stream joins that, being very sensitive to rate-varying data streams, would otherwise require costly frequent monitoring. We propose a dynamic model for the processing throughput and latency of adaptive stream joins that run with different parallelism degrees. The model is presented with progressive complexity, from a centralized non-deterministic up to a deterministic parallel stream join, describing how throughput and latency dynamics are influenced by various configuration parameters. The model is catalytic for understanding the behavior of stream joins across different system deployments, as we show with our model-based autoscaling methodology for changing the parallelism degree of stream joins during execution. Our thorough evaluation, for a broad spectrum of parameters, confirms that the model can reliably predict throughput and latency metrics with fairly high accuracy, with the median estimation error ranging from approximately 0.1% to 6.5%, even for an overloaded system. Furthermore, we show that our model allows us to efficiently control adaptive stream joins by estimating the needed resources solely from the observed input load. In particular, we show it can be employed to enable efficient autoscaling, even when large changes in the input load happen frequently (in the realm of seconds).
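To illustrate the kind of model-based autoscaling the abstract describes, the sketch below estimates a parallelism degree solely from the observed input load. This is a minimal, hypothetical example and not the paper's actual model: it assumes a symmetric sliding-window stream join in which every arriving tuple is compared against the opposite stream's window and the comparison work is split evenly across parallel instances; the function name, the cost parameter, and the utilization target are all illustrative assumptions.

```python
import math

def required_parallelism(input_rate, window_seconds,
                         cost_per_comparison_s, target_utilization=0.7):
    """Estimate the parallelism degree needed to sustain `input_rate`.

    Illustrative sketch only. Assumes a symmetric sliding-window join:
    each arriving tuple is compared against the opposite window, and the
    total comparison work is split evenly across parallel instances.
    """
    # Tuples held in the opposite window at steady state.
    window_tuples = input_rate * window_seconds
    # Total comparison work generated per second, in CPU-seconds.
    total_work = input_rate * window_tuples * cost_per_comparison_s
    # Scale out so each instance stays below the target utilization.
    return max(1, math.ceil(total_work / target_utilization))

# The quadratic dependence on the input rate is why such joins are so
# sensitive to rate-varying streams: doubling the rate roughly
# quadruples the comparison work.
low = required_parallelism(1_000, 10.0, 1e-7)   # 1 ktuple/s
high = required_parallelism(2_000, 10.0, 1e-7)  # 2 ktuple/s
```

An autoscaler built on such a model can react to a load change without probing the system: it recomputes the estimate from the new observed rate and adjusts the parallelism degree in one step, which is what makes frequent (seconds-scale) load shifts tractable.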
