Paper Title
Combining Ensembles and Data Augmentation can Harm your Calibration
Paper Authors
Paper Abstract
Ensemble methods which average over multiple neural network predictions are a simple approach to improve a model's calibration and robustness. Similarly, data augmentation techniques, which encode prior information in the form of invariant feature transformations, are effective for improving calibration and robustness. In this paper, we show a surprising pathology: combining ensembles and data augmentation can harm model calibration. This leads to a trade-off in practice, whereby improved accuracy by combining the two techniques comes at the expense of calibration. On the other hand, selecting only one of the techniques ensures good uncertainty estimates at the expense of accuracy. We investigate this pathology and identify a compounding under-confidence among methods which marginalize over sets of weights and data augmentation techniques which soften labels. Finally, we propose a simple correction, achieving the best of both worlds with significant accuracy and calibration gains over using only ensembles or data augmentation individually. Applying the correction produces a new state of the art in uncertainty calibration across CIFAR-10, CIFAR-100, and ImageNet.
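To make the two ingredients of the abstract concrete, the following is a minimal sketch (not the authors' code) of ensemble prediction averaging, mixup-style augmentation whose soft labels are the label-softening effect discussed above, and a standard binned expected calibration error (ECE) metric one could use to observe the under-confidence when the two are combined. All array shapes, function names, and parameter values here are illustrative assumptions.

```python
import numpy as np

def ensemble_predict(member_probs):
    """Average class probabilities over ensemble members.

    member_probs: array of shape (n_members, n_examples, n_classes).
    Returns averaged probabilities of shape (n_examples, n_classes).
    """
    return member_probs.mean(axis=0)

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """Mixup augmentation: convex combinations of inputs and one-hot labels.

    The mixed labels are soft (non one-hot), which is the label-softening
    behaviour that can compound with ensembling into under-confidence.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix

def expected_calibration_error(probs, labels, n_bins=15):
    """Binned ECE: |accuracy - confidence| weighted by the fraction of
    examples falling in each confidence bin."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```

In use, one would train each ensemble member on mixup-augmented batches, average their test-time probabilities with `ensemble_predict`, and compare the resulting ECE against a single model or an ensemble trained without augmentation; the paper's proposed correction adjusts how aggressively labels are softened so that this combination no longer drives confidence below accuracy.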