Paper Title
Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models
Paper Authors
Paper Abstract
This paper presents exploratory work on whether, and to what extent, biases against queer and trans people are encoded in large language models (LLMs) such as BERT. We also propose a method for reducing these biases in downstream tasks: finetuning the models on data written by and/or about queer people. To measure anti-queer bias, we introduce a new benchmark dataset, WinoQueer, modeled after other bias-detection benchmarks but addressing homophobic and transphobic biases. We find that BERT shows significant homophobic bias, but that this bias can be mostly mitigated by finetuning BERT on a natural language corpus written by members of the LGBTQ+ community.