Title

Bayesian error propagation for neural-net based parameter inference

Authors

Daniela Grandón and Elena Sellentin

Abstract


Neural nets have become popular for accelerating parameter inference, especially for the upcoming generation of galaxy surveys in cosmology. As neural nets are approximate by nature, a recurrent question has been how to propagate a neural net's approximation error in order to avoid biases in the parameter inference. We present a Bayesian solution to propagating a neural net's approximation error and thereby debiasing parameter inference. We exploit the fact that a neural net reports its approximation errors during the validation phase. We capture the thus-reported approximation errors via the highest-order summary statistics, allowing us to eliminate the neural net's bias during inference and to propagate its uncertainties. We demonstrate that our method is quickly implemented and successfully infers parameters even for strongly biased neural nets. In summary, our method provides the missing element for judging the accuracy of a posterior when it cannot be computed from an infinitely accurate theory code.
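To make the idea concrete, the abstract's pipeline (measure the emulator's residuals on a validation set, summarize them statistically, then debias the likelihood and inflate its covariance) can be sketched as follows. This is a minimal illustration, not the paper's exact formalism: the validation data, the Gaussian-likelihood form, and the use of only the first two residual moments are all assumptions made here for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical validation set: exact theory predictions vs. the
# neural-net emulator's outputs at the same parameter points.
n_val, n_bins = 500, 10
exact = rng.normal(size=(n_val, n_bins))
emulated = exact + 0.05 + 0.02 * rng.normal(size=(n_val, n_bins))

# Summary statistics of the emulator's approximation error:
# the mean residual (a systematic bias) and the residual covariance
# (the extra scatter the emulator introduces).
residuals = emulated - exact
bias = residuals.mean(axis=0)
err_cov = np.cov(residuals, rowvar=False)

def log_likelihood(data, model_emulated, data_cov):
    """Gaussian log-likelihood in which the emulator prediction is
    debiased by the mean validation residual and the residual scatter
    is added to the data covariance (an illustrative sketch only)."""
    model = model_emulated - bias   # remove the emulator's mean bias
    cov = data_cov + err_cov        # propagate the emulator's scatter
    r = data - model
    sign, logdet = np.linalg.slogdet(2.0 * np.pi * cov)
    return -0.5 * (r @ np.linalg.solve(cov, r) + logdet)
```

Evaluating this corrected likelihood inside a standard sampler would then yield posteriors that account for the emulator's imperfection rather than inheriting its bias.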
