Paper Title


Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics

Authors

Debjani Saha, Candice Schumann, Duncan C. McElfresh, John P. Dickerson, Michelle L. Mazurek, Michael Carl Tschantz

Abstract


Bias in machine learning has manifested injustice in several areas, such as medicine, hiring, and criminal justice. In response, computer scientists have developed myriad definitions of fairness to correct this bias in fielded algorithms. While some definitions are based on established legal and ethical norms, others are largely mathematical. It is unclear whether the general public agrees with these fairness definitions, and perhaps more importantly, whether they understand these definitions. We take initial steps toward bridging this gap between ML researchers and the public, by addressing the question: does a lay audience understand a basic definition of ML fairness? We develop a metric to measure comprehension of three such definitions--demographic parity, equal opportunity, and equalized odds. We evaluate this metric using an online survey, and investigate the relationship between comprehension and sentiment, demographics, and the definition itself.
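For reference, the three fairness definitions named in the abstract are commonly stated as follows, writing $\hat{Y}$ for the classifier's prediction, $Y$ for the true label, and $A \in \{0, 1\}$ for a protected attribute. This is the standard formulation from the fairness literature, not a quotation from the paper itself:

\begin{aligned}
\text{Demographic parity:} \quad & \Pr[\hat{Y}=1 \mid A=0] = \Pr[\hat{Y}=1 \mid A=1] \\
\text{Equal opportunity:} \quad & \Pr[\hat{Y}=1 \mid Y=1,\, A=0] = \Pr[\hat{Y}=1 \mid Y=1,\, A=1] \\
\text{Equalized odds:} \quad & \Pr[\hat{Y}=1 \mid Y=y,\, A=0] = \Pr[\hat{Y}=1 \mid Y=y,\, A=1], \quad y \in \{0, 1\}
\end{aligned}

Informally, demographic parity equalizes the overall rate of positive predictions across groups, equal opportunity equalizes the true positive rate, and equalized odds additionally equalizes the false positive rate.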
