Paper Title
Bias Impact Analysis of AI in Consumer Mobile Health Technologies: Legal, Technical, and Policy
Paper Authors
Paper Abstract
Today's large-scale algorithmic and automated deployment of decision-making systems threatens to exclude marginalized communities. The emergent danger thus comes from the effectiveness and propensity of such systems to replicate, reinforce, or amplify harmful existing discriminatory acts. Algorithmic bias exposes a deeply entrenched encoding of a range of unwanted biases that can have profound real-world effects, manifesting in domains from employment to housing to healthcare. The last decade of research on, and examples of, these effects further underscores the need to examine any claim of a value-neutral technology. This work examines algorithmic bias in consumer mobile health technologies (mHealth), a term used to describe mobile technology and associated sensors that provide healthcare solutions throughout patient journeys. We also include mental and behavioral health (mental and physiological) as part of our study. Furthermore, we explore to what extent current mechanisms (legal, technical, and normative) help mitigate potential risks associated with unwanted bias in the intelligent systems that make up the mHealth domain. We provide additional guidance on the roles and responsibilities that technologists and policymakers have to ensure that such systems empower patients equitably.