Paper Title

Homophily and Incentive Effects in Use of Algorithms

Paper Authors

Fogliato, Riccardo, Fazelpour, Sina, Gupta, Shantanu, Lipton, Zachary, Danks, David

Paper Abstract

As algorithmic tools increasingly aid experts in making consequential decisions, the need to understand the precise factors that mediate their influence has grown commensurately. In this paper, we present a crowdsourcing vignette study designed to assess the impacts of two plausible factors on AI-informed decision-making. First, we examine homophily -- do people defer more to models that tend to agree with them? -- by manipulating the agreement during training between participants and the algorithmic tool. Second, we consider incentives -- how do people incorporate a (known) cost structure in the hybrid decision-making setting? -- by varying rewards associated with true positives vs. true negatives. Surprisingly, we found limited influence of homophily and no evidence of incentive effects, despite participants performing similarly to previous studies. Higher levels of agreement between the participant and the AI tool yielded more confident predictions, but only when outcome feedback was absent. These results highlight the complexity of characterizing human-algorithm interactions, and suggest that findings from social psychology may require re-examination when humans interact with algorithms.
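To make the incentive manipulation concrete, the following is a minimal illustrative sketch (not the study's actual materials or code) of how an asymmetric reward for true positives vs. true negatives shifts the reward-maximizing decision threshold; the function names and reward values below are hypothetical.

```python
# Illustrative sketch only: shows how unequal rewards for true positives vs.
# true negatives change the probability threshold that maximizes expected reward.
# All names and values are hypothetical, not taken from the paper.

def optimal_threshold(reward_tp: float, reward_tn: float) -> float:
    """Probability of a positive outcome above which predicting 'positive'
    maximizes expected reward, assuming correct answers earn reward_tp or
    reward_tn and incorrect answers earn nothing.

    Predict positive when p * reward_tp >= (1 - p) * reward_tn,
    i.e. when p >= reward_tn / (reward_tp + reward_tn).
    """
    return reward_tn / (reward_tp + reward_tn)


def decide(p_positive: float, reward_tp: float, reward_tn: float) -> str:
    """Choose the label with the higher expected reward, given P(positive)."""
    threshold = optimal_threshold(reward_tp, reward_tn)
    return "positive" if p_positive >= threshold else "negative"


if __name__ == "__main__":
    # Symmetric rewards: the threshold sits at 0.5.
    print(optimal_threshold(1.0, 1.0))  # 0.5
    # True positives pay three times as much: the threshold drops to 0.25,
    # so a reward-sensitive decision-maker should call 'positive' more often.
    print(optimal_threshold(3.0, 1.0))  # 0.25
    print(decide(0.3, 3.0, 1.0))        # 'positive'
```

Under this simple expected-reward logic, a participant who fully internalized the cost structure would shift their decision boundary toward the more rewarded outcome, which is the kind of incentive effect the study tests for.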
