Paper Title
Prediction intervals for Deep Neural Networks
Paper Authors
Abstract
The aim of this paper is to propose a suitable method for constructing prediction intervals for the output of neural network models. To do this, we adapt the extremely randomized trees method, originally developed for random forests, to construct ensembles of neural networks. The extra randomness introduced in the ensemble reduces the variance of the predictions and yields gains in out-of-sample accuracy. An extensive Monte Carlo simulation exercise shows the good performance of this novel method for constructing prediction intervals in terms of coverage probability and mean square prediction error. The approach is superior to state-of-the-art methods in the literature, such as the widely used MC dropout and bootstrap procedures. The out-of-sample accuracy of the novel algorithm is further evaluated using experimental settings already adopted in the literature.
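The abstract describes the general recipe: inject extra randomness into an ensemble of networks, then read a prediction interval off the spread of the ensemble's outputs. The sketch below is a minimal illustration of that idea, not the paper's algorithm: each ensemble member is a single-hidden-layer network whose hidden weights are drawn at random (the assumed source of extra randomness here) and whose output layer is fit by least squares; the interval is taken from ensemble quantiles. The toy data, the `hidden` width, and the 90% level are all illustrative choices, and the interval here reflects only model variance, not observation noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative, not from the paper): y = sin(x) + noise
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, size=200)

def fit_random_net(X, y, rng, hidden=50):
    """One ensemble member: random hidden weights (the extra randomness),
    output weights fit by ordinary least squares."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return lambda Xnew: np.tanh(Xnew @ W + b) @ beta

# Build the ensemble; each member sees the same data but its own random weights
ensemble = [fit_random_net(X, y, rng) for _ in range(30)]

Xtest = np.linspace(-3, 3, 5).reshape(-1, 1)
preds = np.stack([f(Xtest) for f in ensemble])  # shape: (members, test points)

# Point prediction and a 90% interval from the ensemble's empirical quantiles
mean = preds.mean(axis=0)
lo, hi = np.quantile(preds, [0.05, 0.95], axis=0)
```

In practice each member would be a trained deep network and the randomization scheme would follow the paper's adaptation of extremely randomized trees; the quantile step at the end is the same in either case.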