Paper Title


Fairness Preferences, Actual and Hypothetical: A Study of Crowdworker Incentives

Authors

Angie Peng, Jeff Naecker, Ben Hutchinson, Andrew Smart, Nyalleng Moorosi

Abstract


How should we decide which fairness criteria or definitions to adopt in machine learning systems? To answer this question, we must study the fairness preferences of actual users of machine learning systems. Stringent parity constraints on treatment or impact can come with trade-offs, and may not even be preferred by the social groups in question (Zafar et al., 2017). Thus it might be beneficial to elicit what the group's preferences are, rather than rely on a priori defined mathematical fairness constraints. Simply asking users for self-reported rankings is challenging because research has shown that there are often gaps between people's stated and actual preferences (Bernheim et al., 2013). This paper outlines a research program and experimental designs for investigating these questions. Participants in the experiments are invited to perform a set of tasks in exchange for a base payment; they are told upfront that they may receive a bonus later on, and that the bonus could depend on some combination of output quantity and quality. The same group of workers then votes on a bonus payment structure, to elicit preferences. The voting is hypothetical (not tied to an outcome) for half the group and actual (tied to the actual payment outcome) for the other half, so that we can understand the relation between a group's actual preferences and its hypothetical (stated) preferences. Connections to and lessons for fairness in machine learning are explored.
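The between-subjects design described above can be sketched in code. The following is a minimal toy simulation, not the authors' actual protocol: the scheme names, worker IDs, and random voting are all hypothetical placeholders, used only to illustrate the random split into a hypothetical-vote arm and an actual-vote arm and the per-arm tallying of preferences.

```python
import random

random.seed(0)

# Hypothetical bonus payment structures workers might vote on.
BONUS_SCHEMES = ["equal-split", "quantity-based", "quality-based"]

def assign_conditions(worker_ids):
    """Randomly split workers into 'hypothetical' and 'actual' voting arms."""
    shuffled = list(worker_ids)
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return {w: ("hypothetical" if i < half else "actual")
            for i, w in enumerate(shuffled)}

def tally_votes(votes, conditions):
    """Count votes for each bonus scheme within each experimental arm."""
    counts = {arm: {s: 0 for s in BONUS_SCHEMES}
              for arm in ("hypothetical", "actual")}
    for worker, scheme in votes.items():
        counts[conditions[worker]][scheme] += 1
    return counts

# Toy run: 10 workers, each casting one randomly chosen vote.
workers = [f"w{i}" for i in range(10)]
conditions = assign_conditions(workers)
votes = {w: random.choice(BONUS_SCHEMES) for w in workers}
print(tally_votes(votes, conditions))
```

Comparing the two arms' tallies is what lets the study measure the gap between stated and actual preferences; in the real experiment, of course, the "actual" arm's winning scheme would determine the bonuses actually paid.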
