Paper Title

Out of One, Many: Using Language Models to Simulate Human Samples

Paper Authors

Argyle, Lisa P., Busby, Ethan C., Fulda, Nancy, Gubler, Joshua, Rytting, Christopher, Wingate, David

Paper Abstract

We propose and explore the possibility that language models can be studied as effective proxies for specific human sub-populations in social science research. Practical and research applications of artificial intelligence tools have sometimes been limited by problematic biases (such as racism or sexism), which are often treated as uniform properties of the models. We show that the "algorithmic bias" within one such tool -- the GPT-3 language model -- is instead both fine-grained and demographically correlated, meaning that proper conditioning will cause it to accurately emulate response distributions from a wide variety of human subgroups. We term this property "algorithmic fidelity" and explore its extent in GPT-3. We create "silicon samples" by conditioning the model on thousands of socio-demographic backstories from real human participants in multiple large surveys conducted in the United States. We then compare the silicon and human samples to demonstrate that the information contained in GPT-3 goes far beyond surface similarity. It is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes, and socio-cultural context that characterize human attitudes. We suggest that language models with sufficient algorithmic fidelity thus constitute a novel and powerful tool to advance understanding of humans and society across a variety of disciplines.
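To make the conditioning step concrete, here is a minimal sketch of the "silicon sample" idea: condition a GPT-3-style completion model on a first-person socio-demographic backstory, then read off its answer to a closed-ended survey question. It assumes access to the legacy openai.Completion.create interface; the model name, backstory, prompt wording, and survey question are illustrative stand-ins, not the paper's exact prompts or protocol.

```python
# Minimal sketch of building a "silicon sample": condition a GPT-3-style
# completion model on a first-person socio-demographic backstory and read
# off its answer to a closed-ended survey question. Model name, prompt
# wording, and backstory are illustrative assumptions, not the paper's
# exact protocol.
import openai  # legacy (<1.0) openai-python client

openai.api_key = "YOUR_API_KEY"  # placeholder

def silicon_response(backstory, question, options):
    """Query the model once, conditioned on one respondent's backstory."""
    prompt = (
        f"{backstory}\n\n"
        f"{question}\n"
        f"Options: {', '.join(options)}.\n"
        "Answer:"
    )
    completion = openai.Completion.create(
        model="davinci",   # GPT-3 base model; this choice is an assumption
        prompt=prompt,
        max_tokens=5,
        temperature=0.7,
    )
    return completion.choices[0].text.strip()

# Hypothetical backstory in the style of a survey respondent profile.
backstory = (
    "I am a 54-year-old woman living in rural Ohio. I work as a nurse, "
    "attend church weekly, and usually vote Republican."
)
print(silicon_response(
    backstory,
    "In the 2016 U.S. presidential election, who did you vote for?",
    ["Donald Trump", "Hillary Clinton", "someone else", "did not vote"],
))
```

Repeating this query over thousands of backstories drawn from real survey respondents, then comparing the resulting response distribution against the human respondents' actual answers, is the kind of fidelity test the abstract describes.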
