Paper Title
The Conflict Between Explainable and Accountable Decision-Making Algorithms
Paper Authors
Abstract
Decision-making algorithms are being used in important decisions, such as who should be enrolled in health care programs and who should be hired. Even though these systems are currently deployed in high-stakes scenarios, many of them cannot explain their decisions. This limitation has prompted the Explainable Artificial Intelligence (XAI) initiative, which aims to make algorithms explainable in order to comply with legal requirements, promote trust, and maintain accountability. This paper questions whether and to what extent explainability can help solve the responsibility issues posed by autonomous AI systems. We suggest that XAI systems providing post-hoc explanations could be seen as blameworthy agents, obscuring the responsibility of developers in the decision-making process. Furthermore, we argue that XAI could result in incorrect attributions of responsibility to vulnerable stakeholders, such as those who are subjected to algorithmic decisions (i.e., patients), due to a misguided perception that they have control over explainable algorithms. This conflict between explainability and accountability can be exacerbated if designers choose to use algorithms and patients as moral and legal scapegoats. We conclude with a set of recommendations for how to approach this tension in the socio-technical process of algorithmic decision-making and a defense of hard regulation to prevent designers from escaping responsibility.