Paper Title

What Is Fairness? On the Role of Protected Attributes and Fictitious Worlds

Authors

Ludwig Bothmann, Kristina Peters, Bernd Bischl

Abstract


A growing body of literature in fairness-aware machine learning (fairML) aims to mitigate machine learning (ML)-related unfairness in automated decision-making (ADM) by defining metrics that measure fairness of an ML model and by proposing methods to ensure that trained ML models achieve low scores on these metrics. However, the underlying concept of fairness, i.e., the question of what fairness is, is rarely discussed, leaving a significant gap between centuries of philosophical discussion and the recent adoption of the concept in the ML community. In this work, we try to bridge this gap by formalizing a consistent concept of fairness and by translating the philosophical considerations into a formal framework for the training and evaluation of ML models in ADM systems. We argue that fairness problems can arise even without the presence of protected attributes (PAs), and point out that fairness and predictive performance are not irreconcilable opposites, but that the latter is necessary to achieve the former. Furthermore, we argue why and how causal considerations are necessary when assessing fairness in the presence of PAs by proposing a fictitious, normatively desired (FiND) world in which PAs have no causal effects. In practice, this FiND world must be approximated by a warped world in which the causal effects of the PAs are removed from the real-world data. Finally, we achieve greater linguistic clarity in the discussion of fairML. We outline algorithms for practical applications and present illustrative experiments on COMPAS data.
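The abstract describes approximating the FiND world by a "warped" world in which the causal effects of the PAs are removed from the real-world data. As a minimal illustrative sketch (not the authors' actual procedure), one crude way to remove a binary PA's effect on a single numeric feature is to mean-shift each PA group onto the overall mean; the function name `warp_feature` and the mean-shifting strategy are assumptions for illustration only:

```python
import numpy as np

def warp_feature(x, pa):
    """Crudely "warp" a numeric feature by removing a binary protected
    attribute's (assumed additive) effect: shift each PA group so that
    its mean equals the overall mean of the feature.

    This is an illustrative stand-in for removing the PA's causal
    effect, not the paper's actual warping algorithm.
    """
    x = np.asarray(x, dtype=float)
    pa = np.asarray(pa)
    warped = x.copy()
    overall_mean = x.mean()
    for group in np.unique(pa):
        mask = pa == group
        # Shift this group's values so the group mean matches the overall mean.
        warped[mask] += overall_mean - x[mask].mean()
    return warped
```

After warping, the feature's group means coincide, so a downstream model can no longer pick up this particular (additive, marginal) PA effect; real causal effects are of course richer than a mean shift, which is why the paper frames the warped world only as an approximation of the FiND world.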
