Paper Title
The European AI Liability Directives -- Critique of a Half-Hearted Approach and Lessons for the Future
Paper Authors
Paper Abstract
As ChatGPT et al. conquer the world, the optimal liability framework for AI systems remains an unsolved problem across the globe. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive and a revision of the Product Liability Directive. They constitute the final cornerstone of EU AI regulation. Crucially, the liability proposals and the EU AI Act are inherently intertwined: the latter does not contain any individual rights of affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a Brussels Effect in AI regulation, with significant consequences for the US and beyond. This paper makes three novel contributions. First, it examines in detail the Commission proposals and shows that, while making steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will primarily rest on disclosure of evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness and causality. Hence, second, the article suggests amendments, which are collected in an Annex at the end of the paper. Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. This includes: a comprehensive framework for AI liability; provisions to support innovation; an extension to non-discrimination/algorithmic fairness, as well as explainable AI; and sustainability. I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and XAI, but potentially also sustainable AI (SAI).