Paper Title
One-vs.-One Mitigation of Intersectional Bias: A General Method to Extend Fairness-Aware Binary Classification
Paper Authors
Paper Abstract
With the widespread adoption of machine learning in the real world, the impact of discriminatory bias has attracted attention. In recent years, various methods to mitigate such bias have been proposed. However, most of them do not consider intersectional bias, which causes unfair situations in which people belonging to specific subgroups of a protected group are treated worse when multiple sensitive attributes are taken into consideration. To mitigate this bias, in this paper we propose a method called One-vs.-One Mitigation, which applies a comparison process between each pair of subgroups defined by the sensitive attributes to fairness-aware machine learning for binary classification. We compare our method with conventional fairness-aware binary classification methods in comprehensive settings using three approaches (pre-processing, in-processing, and post-processing), six metrics (the ratio and difference of demographic parity, equalized odds, and equal opportunity), and two real-world datasets (Adult and COMPAS). In all settings, our method mitigates intersectional bias much better than the conventional methods. These results open up the potential of fairness-aware binary classification for solving more realistic problems that occur when there are multiple sensitive attributes.
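For reference, the six metrics named above pair a ratio and a difference form of three group-fairness criteria. Written for two groups a and b with prediction Ŷ and true label Y, the standard definitions (not restated from the paper itself) are:

\begin{aligned}
\text{Demographic parity:} \quad & P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=b) \\
\text{Equalized odds:} \quad & P(\hat{Y}=1 \mid A=a, Y=y) = P(\hat{Y}=1 \mid A=b, Y=y), \quad y \in \{0, 1\} \\
\text{Equal opportunity:} \quad & P(\hat{Y}=1 \mid A=a, Y=1) = P(\hat{Y}=1 \mid A=b, Y=1)
\end{aligned}

The "difference" metric for each criterion is the absolute gap between the two sides, and the "ratio" metric is the smaller side divided by the larger, so a difference of 0 and a ratio of 1 both indicate perfect parity.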
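The core idea described in the abstract is to enumerate every pair of subgroups induced by the sensitive attributes and apply an ordinary two-group fairness-aware method to each pair. Below is a minimal Python sketch of that enumeration, assuming pandas, illustrative column names 'sex' and 'race', and hypothetical helper names; the per-pair mitigation step and the aggregation of per-pair results are placeholders, not the paper's actual algorithm:

from itertools import combinations, product

import pandas as pd

def subgroups(df, sensitive_cols):
    # Each subgroup is one combination of sensitive-attribute values,
    # e.g. ('Female', 'Black') when sensitive_cols = ['sex', 'race'].
    values = [sorted(df[c].dropna().unique()) for c in sensitive_cols]
    return list(product(*values))

def one_vs_one_pairs(df, sensitive_cols):
    # Every unordered pair of subgroups; a conventional two-group
    # fairness-aware method would be applied to each pair in turn.
    return combinations(subgroups(df, sensitive_cols), 2)

# Hypothetical toy data with two sensitive attributes:
data = pd.DataFrame({
    'sex':  ['Female', 'Male', 'Female', 'Male'],
    'race': ['Black',  'Black', 'White', 'White'],
})

for g1, g2 in one_vs_one_pairs(data, ['sex', 'race']):
    keys = data[['sex', 'race']].apply(tuple, axis=1)
    pair_rows = data[keys.isin({g1, g2})]
    # ... apply any two-group mitigation (pre-, in-, or post-processing)
    # to pair_rows, treating g1 and g2 as the two groups, then aggregate
    # the per-pair results; the aggregation rule is not specified here ...

Note that the number of pairs grows quadratically in the number of subgroups: two binary sensitive attributes induce four subgroups and hence six pairwise comparisons.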