Title


A relationship and not a thing: A relational approach to algorithmic accountability and assessment documentation

Authors

Jacob Metcalf, Emanuel Moss, Ranjit Singh, Emnet Tafese, Elizabeth Anne Watkins

Abstract


Central to a number of scholarly, regulatory, and public conversations about algorithmic accountability is the question of who should have access to documentation that reveals the inner workings, intended function, and anticipated consequences of algorithmic systems, potentially establishing new routes for impacted publics to contest the operations of these systems. Currently, developers largely have a monopoly on information about how their systems actually work and are incentivized to maintain their own ignorance about aspects of how their systems affect the world. Increasingly, legislators, regulators and advocates have turned to assessment documentation in order to address the gap between the public's experience of algorithmic harms and the obligations of developers to document and justify their design decisions. However, issues of standing and expertise currently prevent publics from cohering around shared interests in preventing and redressing algorithmic harms; as we demonstrate with multiple cases, courts often find computational harms non-cognizable and rarely require developers to address material claims of harm. Constructed with a triadic accountability relationship, algorithmic impact assessment regimes could alter this situation by establishing procedural rights around public access to reporting and documentation. Developing a relational approach to accountability, we argue that robust accountability regimes must establish opportunities for publics to cohere around shared experiences and interests, and to contest the outcomes of algorithmic systems that affect their lives. Furthermore, algorithmic accountability policies currently under consideration in many jurisdictions must provide the public with adequate standing and opportunities to access and contest the documentation provided by the actors and the judgments passed by the forum.
