Paper Title
Metric-Free Individual Fairness with Cooperative Contextual Bandits
Paper Authors
Paper Abstract
Data mining algorithms are increasingly used in automated decision making across all walks of daily life. Unfortunately, as reported in several studies, these algorithms inject bias from data and the environment, leading to inequitable and unfair solutions. To mitigate bias in machine learning, different formalizations of fairness have been proposed, which can be categorized into group fairness and individual fairness. Group fairness requires that different groups be treated similarly, which might be unfair to some individuals within a group. Individual fairness, on the other hand, requires that similar individuals be treated similarly. However, individual fairness remains understudied due to its reliance on problem-specific similarity metrics. We propose a metric-free formulation of individual fairness and a cooperative contextual bandits (CCB) algorithm. The CCB algorithm treats fairness as a reward and attempts to maximize it. The advantage of treating fairness as a reward is that the fairness criterion does not need to be differentiable. The proposed algorithm was tested on multiple real-world benchmark datasets. The results demonstrate its effectiveness at mitigating bias and at achieving both individual and group fairness.
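The key idea of using fairness as a bandit reward can be illustrated with a minimal sketch. This is not the authors' CCB algorithm; it is a hypothetical epsilon-greedy contextual bandit in which each arm is a candidate decision threshold and the reward is a non-differentiable individual-fairness score (the fraction of "similar" individual pairs that receive the same decision). All names and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fairness_reward(threshold, scores, similar_pairs):
    """Fraction of similar pairs receiving the same decision.

    This score is a step function of the threshold, so it is
    non-differentiable -- exactly the case where a reward-based
    bandit formulation helps where gradient methods cannot.
    """
    decisions = scores >= threshold
    same = [decisions[i] == decisions[j] for i, j in similar_pairs]
    return float(np.mean(same))

def epsilon_greedy_bandit(arms, scores, similar_pairs, rounds=200, eps=0.1):
    """Pick the arm (threshold) maximizing the fairness reward."""
    counts = np.zeros(len(arms))
    values = np.zeros(len(arms))  # running mean reward per arm
    for _ in range(rounds):
        if rng.random() < eps:
            a = int(rng.integers(len(arms)))  # explore
        else:
            a = int(np.argmax(values))        # exploit
        r = fairness_reward(arms[a], scores, similar_pairs)
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental mean
    return arms[int(np.argmax(values))]

# Toy data: model scores for 50 individuals, with adjacent pairs
# declared "similar" (a stand-in for a learned similarity notion).
scores = rng.random(50)
similar_pairs = [(i, i + 1) for i in range(0, 48, 2)]
arms = [0.2, 0.4, 0.5, 0.6, 0.8]
best = epsilon_greedy_bandit(arms, scores, similar_pairs)
```

The sketch shows why differentiability is not required: the bandit only ever evaluates the fairness criterion as a black-box reward, never its gradient.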