Paper Title
RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection
Paper Authors
Paper Abstract
Recent studies have addressed the concern of detecting and rejecting out-of-distribution (OOD) samples as a major challenge in the safe deployment of deep learning (DL) models. Ideally, a DL model should be confident only about in-distribution (ID) data, which is the driving principle of OOD detection. In this paper, we propose a simple yet effective generalized OOD detection method that is independent of out-of-distribution datasets. Our approach relies on self-supervised feature learning of the training samples, where the embeddings lie on a compact low-dimensional space. Motivated by recent studies showing that self-supervised adversarial contrastive learning helps robustify a model, we empirically show that a model pre-trained with self-supervised contrastive learning yields a better backbone for uni-dimensional feature learning in the latent space. The method proposed in this work, referred to as RODD, outperforms SOTA detection performance on an extensive suite of benchmark datasets for OOD detection tasks. On the CIFAR-100 benchmark, RODD achieves a 26.97% lower false-positive rate (FPR@95) than SOTA methods.
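To make the scoring pipeline the abstract alludes to concrete, below is a minimal, hedged sketch in Python. It assumes one plausible reading of "uni-dimensional feature learning": each ID class is summarized by a single unit direction in the latent space of a contrastively pre-trained encoder (here, the first singular vector of that class's feature matrix), a test sample is scored by its maximum cosine similarity to those directions, and FPR@95 is the false-positive rate at the threshold that accepts 95% of ID samples. All function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def class_directions(feats, labels):
    """One unit direction per ID class: the first right singular vector of
    the class's (n_c, d) feature matrix. An illustrative reading of the
    abstract's 'uni-dimensional feature learning', not the paper's code."""
    dirs = []
    for c in np.unique(labels):
        X = feats[labels == c]                        # (n_c, d) ID features
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        dirs.append(vt[0])                            # dominant direction
    return np.stack(dirs)                             # (num_classes, d)

def ood_score(feats, dirs):
    """Higher score = more ID-like: max cosine similarity between a test
    feature and any class direction."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    d = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    return (f @ d.T).max(axis=1)

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR@95: fraction of OOD samples scoring above the threshold that
    keeps 95% of ID samples (the metric reported in the abstract)."""
    thresh = np.percentile(id_scores, 5)              # 95% of ID lie above
    return float((ood_scores >= thresh).mean())
```

Under this sketch, lowering FPR@95 means fewer OOD samples slip past the threshold at a fixed 95% ID acceptance rate, which is the sense in which the reported 26.97% improvement on CIFAR-100 should be read.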