Paper Title
Algorithmic Transparency with Strategic Users
Authors
Abstract
Should firms that apply machine learning algorithms in their decision-making make those algorithms transparent to the users they affect? Despite growing calls for algorithmic transparency, most firms have kept their algorithms opaque, citing the risk that users will game the system and thereby erode the algorithm's predictive power. We develop an analytical model to compare firm and user surplus with and without algorithmic transparency in the presence of strategic users, and we present novel insights. We identify a broad set of conditions under which making the algorithm transparent benefits the firm. We show that, in some cases, the predictive power of a machine learning algorithm may even increase when the firm makes it transparent. By contrast, users may not always be better off under algorithmic transparency. These results hold even when the predictive power of the opaque algorithm comes largely from correlational features and the cost for users to improve on those features is close to zero. Overall, our results show that firms should not view manipulation by users as inherently harmful. Rather, they should use algorithmic transparency as a lever to motivate users to invest in more desirable features.