Paper Title
Cultural Incongruencies in Artificial Intelligence
Paper Authors
Paper Abstract
Artificial intelligence (AI) systems attempt to imitate human behavior. How well they perform this imitation is often used to assess their utility and to attribute human-like (or artificial) intelligence to them. However, most work on AI refers to and relies on human intelligence without accounting for the fact that human behavior is inherently shaped by the cultural contexts humans are embedded in, the values and beliefs they hold, and the social practices they follow. Additionally, since AI technologies are mostly conceived and developed in just a handful of countries, they embed the cultural values and practices of these countries. Similarly, the data used to train the models also fails to equitably represent global cultural diversity. Problems therefore arise when these technologies interact with globally diverse societies and cultures, with different values and interpretive practices. In this position paper, we describe a set of cultural dependencies and incongruencies in the context of AI-based language and vision technologies, and reflect on the possibilities of and potential strategies towards addressing these incongruencies.