Paper Title
Uncertainty-Aware Deep Calibrated Salient Object Detection
Paper Authors
Paper Abstract
Existing deep neural network-based salient object detection (SOD) methods mainly focus on pursuing high network accuracy. However, those methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem. Thus, state-of-the-art SOD networks are prone to being overconfident. In other words, the predicted confidence of the networks does not reflect the real probability of correctness of salient object detection, which significantly hinders their real-world applicability. In this paper, we introduce an uncertainty-aware deep SOD network, and propose two strategies from different perspectives to prevent deep SOD networks from being overconfident. The first strategy, namely Boundary Distribution Smoothing (BDS), generates continuous labels by smoothing the original binary ground-truth with respect to pixel-wise uncertainty. The second strategy, namely Uncertainty-Aware Temperature Scaling (UATS), exploits a relaxed Sigmoid function during both training and testing with spatially-variant temperature scaling to produce softened output. Both strategies can be incorporated into existing deep SOD networks with minimal effort. Moreover, we propose a new saliency evaluation metric, namely the dense calibration measure C, to measure how well a model is calibrated on a given dataset. Extensive experimental results on seven benchmark datasets demonstrate that our solutions not only better calibrate SOD models, but also improve network accuracy.
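The two strategies in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names are hypothetical, the BDS step is approximated by a plain box filter rather than the paper's uncertainty-weighted smoothing, and the per-pixel temperature map is assumed to be supplied by some uncertainty estimate. It only shows the two mechanisms described: softening binary labels near boundaries, and flattening the Sigmoid with a spatially-variant temperature.

```python
import numpy as np

def relaxed_sigmoid(logits, temperature):
    """UATS-style relaxed Sigmoid (sketch).

    `temperature` is a per-pixel map of the same shape as `logits`;
    values > 1 flatten the Sigmoid, so uncertain pixels receive less
    extreme confidence than a standard Sigmoid would assign.
    """
    return 1.0 / (1.0 + np.exp(-logits / temperature))

def smooth_labels(gt, kernel=3):
    """BDS-style sketch: soften a binary ground-truth map by local
    averaging, so pixels near object boundaries get continuous labels
    in (0, 1) while interior pixels keep their hard 0/1 values.
    (A simplified stand-in for the paper's uncertainty-aware smoothing.)
    """
    pad = kernel // 2
    padded = np.pad(gt.astype(float), pad, mode="edge")
    out = np.empty(gt.shape, dtype=float)
    h, w = gt.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return out
```

With a temperature of 1 the relaxed Sigmoid reduces to the ordinary Sigmoid, so both strategies can be bolted onto an existing SOD network without changing its architecture, which matches the "minimal effort" claim in the abstract.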