Paper Title

A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores

Paper Authors

Maria De-Arteaga, Riccardo Fogliato, Alexandra Chouldechova

Paper Abstract

The increased use of algorithmic predictions in sensitive domains has been accompanied by both enthusiasm and concern. To understand the opportunities and risks of these technologies, it is key to study how experts alter their decisions when using such tools. In this paper, we study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions. We focus on the question: Are humans capable of identifying cases in which the machine is wrong, and of overriding those recommendations? We first show that humans do alter their behavior when the tool is deployed. Then, we show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk, even when overriding the recommendation requires supervisory approval. These results highlight the risks of full automation and the importance of designing decision pipelines that provide humans with autonomy.
