Paper Title
Fair Robust Active Learning by Joint Inconsistency
Paper Authors
Paper Abstract
Fairness and robustness play vital roles in trustworthy machine learning. Observing safety-critical needs in various annotation-expensive vision applications, we introduce a novel learning framework, Fair Robust Active Learning (FRAL), generalizing conventional active learning to fair and adversarially robust scenarios. This framework allows us to achieve both standard and robust minimax fairness with limited acquired labels. Within FRAL, we then observe that existing fairness-aware data selection strategies suffer from either ineffectiveness under severe data imbalance or inefficiency due to the heavy computational cost of adversarial training. To address these two problems, we develop a novel Joint INconsistency (JIN) method that exploits prediction inconsistencies between benign and adversarial inputs as well as between standard and robust models. These two inconsistencies can be used to identify potential fairness gains and data imbalance. Thus, by performing label acquisition with our inconsistency-based ranking metrics, we can alleviate the class imbalance issue and enhance minimax fairness with limited computation. Extensive experiments on diverse datasets and sensitive groups demonstrate that our method obtains the best results in standard and robust fairness under white-box PGD attacks, compared with existing active data selection baselines.
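To make the ranking idea concrete, here is a minimal sketch of how the two JIN inconsistencies described in the abstract could be combined into an acquisition score. This is an illustration under stated assumptions, not the paper's exact formulation: the names `standard_model`, `robust_model`, and the pre-computed `adv_inputs` (e.g., adversarial examples generated by a PGD attack) are hypothetical placeholders.

```python
# Hypothetical sketch of inconsistency-based ranking for label acquisition.
# Assumes `standard_model` and `robust_model` are trained classifiers and
# `adv_inputs` are adversarial versions of `benign_inputs` (e.g., via PGD).
import torch

@torch.no_grad()
def jin_scores(standard_model, robust_model, benign_inputs, adv_inputs):
    """Score unlabeled samples by two prediction inconsistencies:
    (1) the robust model's predictions on benign vs. adversarial inputs,
    (2) the standard vs. robust model's predictions on benign inputs."""
    std_pred = standard_model(benign_inputs).argmax(dim=1)
    rob_pred = robust_model(benign_inputs).argmax(dim=1)
    rob_adv_pred = robust_model(adv_inputs).argmax(dim=1)

    # Inconsistency between benign and adversarial predictions
    adv_inconsistency = (rob_pred != rob_adv_pred).float()
    # Inconsistency between the standard and robust models
    model_inconsistency = (std_pred != rob_pred).float()

    # Samples that disagree on either signal rank higher for annotation
    return adv_inconsistency + model_inconsistency

# Usage: pick the top-k highest-scoring unlabeled samples to label, e.g.
#   scores = jin_scores(standard_model, robust_model, x_pool, x_pool_adv)
#   query_idx = scores.topk(k=budget).indices
```

In this sketch, samples whose predictions flip under perturbation or differ across the two models are treated as the most informative, which matches the abstract's claim that such inconsistencies flag potential fairness gains and imbalanced classes; the actual per-group weighting and selection procedure would follow the paper.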