Paper Title
Future of computing at the Large Hadron Collider
Paper Authors
Paper Abstract
High energy physics (HEP) experiments at the LHC generate data at a rate of $\mathcal{O}(10)$ terabits per second. This data rate is expected to increase exponentially as the experiments are upgraded to achieve higher collision energies. The increasing size of particle physics datasets, combined with plateauing single-core CPU performance, is expected to create a four-fold shortfall in computing power by 2030. This makes it necessary to investigate alternative computing architectures to cope with the next generation of HEP experiments. This study provides an overview of the different computing techniques used in the LHCb experiment (trigger, track reconstruction, vertex reconstruction, particle identification). Furthermore, this research led to the creation of three event reconstruction algorithms for the LHCb experiment. These algorithms are benchmarked on various computing architectures: the CPU, the GPU, and a new type of processor called the IPU, containing roughly $\mathcal{O}(10)$, $\mathcal{O}(1000)$, and $\mathcal{O}(1000)$ cores, respectively. This research indicates that multi-core architectures such as GPUs and IPUs are better suited to computationally intensive tasks within HEP experiments.
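The abstract's central premise, that independent per-event workloads map well onto hardware with many cores, can be illustrated with a toy benchmark. The sketch below is not from the paper: the `reconstruct_event` workload and all names are hypothetical, and a CPU process pool stands in for the GPU/IPU parallelism discussed in the study; the point is only that throughput for independent events scales with the number of workers.

```python
# Minimal sketch (illustrative, not from the paper): HEP events are
# independent, so per-event reconstruction parallelizes trivially and
# throughput grows with core count until the hardware is saturated.
import math
import time
from multiprocessing import Pool


def reconstruct_event(seed: int) -> float:
    """Toy stand-in for a CPU-bound per-event reconstruction step."""
    acc = 0.0
    for i in range(1, 20_000):
        acc += math.sin(seed * i) / i
    return acc


if __name__ == "__main__":
    events = list(range(2_000))

    # Baseline: process events one at a time on a single core.
    t0 = time.perf_counter()
    serial = [reconstruct_event(e) for e in events]
    t1 = time.perf_counter()

    # Parallel: one worker process per available core.
    with Pool() as pool:
        parallel = pool.map(reconstruct_event, events)
    t2 = time.perf_counter()

    print(f"serial:   {t1 - t0:.2f} s")
    print(f"parallel: {t2 - t1:.2f} s  (speed-up ~ number of cores)")
```

On a machine with $N$ cores the parallel pass completes roughly $N$ times faster, which is the same scaling argument that favors $\mathcal{O}(1000)$-core GPUs and IPUs over $\mathcal{O}(10)$-core CPUs for event reconstruction.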