Paper Title
Black Loans Matter: Distributionally Robust Fairness for Fighting Subgroup Discrimination
Paper Authors
Paper Abstract
Algorithmic fairness in lending today relies on group fairness metrics for monitoring statistical parity across protected groups. This approach is vulnerable to subgroup discrimination by proxy, carrying significant risks of legal and reputational damage for lenders and blatantly unfair outcomes for borrowers. Practical challenges arise from the many possible combinations and subsets of protected groups. We motivate this problem against the backdrop of historical and residual racism in the United States, which pollutes all available training data and raises public sensitivity to algorithmic bias. We review the current regulatory compliance protocols for fairness in lending and discuss their limitations relative to what state-of-the-art fairness methods may afford. Drawing on recent developments in individual fairness methods and corresponding fair metric learning algorithms, we propose a solution that addresses subgroup discrimination while adhering to existing group fairness requirements.
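The vulnerability the abstract describes can be made concrete with a small numerical sketch. The example below uses entirely synthetic, hypothetical applicant counts (not data from the paper): a decision rule achieves exact statistical parity on each protected attribute taken marginally, yet strongly discriminates against intersectional subgroups, which is precisely the failure mode that monitoring only marginal group metrics cannot detect.

```python
# Hypothetical sketch: marginal statistical parity can coexist with
# severe intersectional subgroup discrimination. All counts are synthetic.
from itertools import product

# Each applicant record: (race, gender, approved). The decision rule
# approves (A, M) and (B, F) at a 75% rate, and (A, F) and (B, M) at 25%.
applicants = (
    [("A", "M", 1)] * 75 + [("A", "M", 0)] * 25 +
    [("A", "F", 1)] * 25 + [("A", "F", 0)] * 75 +
    [("B", "M", 1)] * 25 + [("B", "M", 0)] * 75 +
    [("B", "F", 1)] * 75 + [("B", "F", 0)] * 25
)

def approval_rate(rows):
    """Fraction of approved applications among the given records."""
    return sum(approved for *_, approved in rows) / len(rows)

# Marginal parity check: every race group and every gender group
# sees an identical 50% approval rate, so group metrics flag nothing.
for attr_idx, values in [(0, ["A", "B"]), (1, ["M", "F"])]:
    for v in values:
        group = [r for r in applicants if r[attr_idx] == v]
        print(f"marginal {v}: {approval_rate(group):.2f}")  # 0.50 each

# Intersectional subgroup rates expose the discrimination by proxy
# that the marginal statistics hide.
for race, gender in product(["A", "B"], ["M", "F"]):
    sub = [r for r in applicants if r[0] == race and r[1] == gender]
    print(f"subgroup ({race}, {gender}): {approval_rate(sub):.2f}")
```

With four binary-attribute subgroups the disparity is easy to enumerate by hand; with many protected attributes the number of subgroups grows combinatorially, which is the practical challenge the abstract raises.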