Paper Title
Macro Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions
Paper Authors
Paper Abstract
Responsible AI must be able to make or support decisions that consider human values and can be justified by human morals. Accommodating values and morals in responsible decision making is supported by adopting a perspective of macro ethics, which views ethics through a holistic lens incorporating social context. Normative ethical principles inferred from philosophy can be used to methodically reason about ethics and make ethical judgements in specific contexts. Operationalising normative ethical principles thus promotes responsible reasoning under the perspective of macro ethics. We survey AI and computer science literature and develop a taxonomy of 21 normative ethical principles which can be operationalised in AI. We describe how each principle has previously been operationalised, highlighting key themes that AI practitioners seeking to implement ethical principles should be aware of. We envision that this taxonomy will facilitate the development of methodologies to incorporate normative ethical principles in reasoning capacities of responsible AI systems.
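To make the notion of "operationalising" a normative ethical principle concrete, the following is a minimal, hypothetical Python sketch (not the authors' method, and not drawn from the paper's taxonomy): each principle is encoded as a decision rule over candidate actions. All names (Action, utilitarian_choice, maximin_choice) and the toy utility values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    # Estimated utility of this action for each affected stakeholder.
    stakeholder_utilities: dict[str, float]

def utilitarian_choice(actions: list[Action]) -> Action:
    """Act utilitarianism: choose the action maximising total welfare."""
    return max(actions, key=lambda a: sum(a.stakeholder_utilities.values()))

def maximin_choice(actions: list[Action]) -> Action:
    """Rawlsian maximin: choose the action whose worst-off stakeholder fares best."""
    return max(actions, key=lambda a: min(a.stakeholder_utilities.values()))

if __name__ == "__main__":
    # Hypothetical scenario with made-up utilities for illustration only.
    candidates = [
        Action("share_widely",  {"patient": 5.0, "doctor": 4.0, "family": -6.0}),
        Action("share_minimal", {"patient": 1.0, "doctor": 1.0, "family": 0.0}),
    ]
    print(utilitarian_choice(candidates).name)  # share_widely  (total 3.0 vs 2.0)
    print(maximin_choice(candidates).name)      # share_minimal (worst-off 0.0 vs -6.0)

In this sketch the two principles endorse different actions in the same context, which illustrates why a taxonomy of principles, and an understanding of how each has been operationalised, matters for responsible reasoning.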