Paper Title

neos: End-to-End-Optimised Summary Statistics for High Energy Physics

Paper Authors

Nathan Simpson, Lukas Heinrich

Paper Abstract

The advent of deep learning has yielded powerful tools to automatically compute gradients of computations. This is because training a neural network equates to iteratively updating its parameters using gradient descent to find the minimum of a loss function. Deep learning is then a subset of a broader paradigm; a workflow with free parameters that is end-to-end optimisable, provided one can keep track of the gradients all the way through. This work introduces neos: an example implementation following this paradigm of a fully differentiable high-energy physics workflow, capable of optimising a learnable summary statistic with respect to the expected sensitivity of an analysis. Doing this results in an optimisation process that is aware of the modelling and treatment of systematic uncertainties.
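To make the paradigm concrete, here is a minimal sketch in JAX (the framework neos is built on) of what "end-to-end optimisable" means in practice. Everything in it is an illustrative assumption rather than the actual neos implementation: the soft sigmoid selection, the toy Gaussian datasets, and the s/sqrt(b) objective, which stands in for the profile-likelihood-based expected sensitivity that neos optimises. The point is only that gradients flow from the objective all the way back to the free parameters.

import jax
import jax.numpy as jnp

def summary_statistic(params, data):
    # Soft (sigmoid) cut on an observable, so the selection stays
    # differentiable with respect to the free parameters.
    weight, bias = params[0], params[1]
    return jax.nn.sigmoid(weight * data + bias)

def expected_significance(params, signal, background):
    # s / sqrt(b) on the soft-selected yields; a toy stand-in for the
    # profile-likelihood-based expected sensitivity used in neos.
    s = jnp.sum(summary_statistic(params, signal))
    b = jnp.sum(summary_statistic(params, background))
    return s / jnp.sqrt(b + 1e-8)

def loss(params, signal, background):
    # Minimising the negative significance maximises sensitivity.
    return -expected_significance(params, signal, background)

key_s, key_b = jax.random.split(jax.random.PRNGKey(0))
signal = jax.random.normal(key_s, (500,)) + 1.0   # toy signal events
background = jax.random.normal(key_b, (5000,))    # toy background events

params = jnp.array([1.0, 0.0])
grad_fn = jax.jit(jax.grad(loss))

# Plain gradient descent: because gradients flow through the selection
# and the objective, the whole workflow is optimised end-to-end.
for _ in range(100):
    params = params - 0.1 * grad_fn(params, signal, background)

print(expected_significance(params, signal, background))

In the full neos workflow the soft cut is replaced by a neural-network summary statistic and the objective by the expected sensitivity of a statistical analysis, including the modelling of systematic uncertainties, but the optimisation loop has the same shape.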
