Paper Title

Men Also Do Laundry: Multi-Attribute Bias Amplification

Paper Authors

Dora Zhao, Jerone T. A. Andrews, Alice Xiang

Paper Abstract

As computer vision systems become more widely deployed, there is increasing concern from both the research community and the public that these systems are not only reproducing but amplifying harmful social biases. The phenomenon of bias amplification, which is the focus of this work, refers to models amplifying inherent training set biases at test time. Existing metrics measure bias amplification with respect to single annotated attributes (e.g., $\texttt{computer}$). However, several visual datasets consist of images with multiple attribute annotations. We show models can learn to exploit correlations with respect to multiple attributes (e.g., {$\texttt{computer}$, $\texttt{keyboard}$}), which are not accounted for by current metrics. In addition, we show current metrics can give the erroneous impression that minimal or no bias amplification has occurred, as they involve aggregating over positive and negative values. Further, these metrics lack a clear desired value, making them difficult to interpret. To address these shortcomings, we propose a new metric: Multi-Attribute Bias Amplification. We validate our proposed metric through an analysis of gender bias amplification on the COCO and imSitu datasets. Finally, we benchmark bias mitigation methods using our proposed metric, suggesting possible avenues for future bias mitigation.
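The abstract's point about signed aggregation can be illustrated with a small sketch (not the paper's actual metric; the attribute names and numbers below are hypothetical): when per-attribute amplification values carry opposite signs, their signed mean can be near zero even though every attribute is amplified.

```python
# Illustrative sketch only: how aggregating signed per-attribute
# bias-amplification values can mask amplification.
# Values are hypothetical changes in gender-attribute correlation
# (model predictions at test time minus training set).
delta = {
    "computer": +0.10,   # amplified toward one gender group
    "keyboard": +0.08,
    "laundry":  -0.09,   # amplified toward the other gender group
    "spatula":  -0.09,
}

# Signed aggregation: positives and negatives cancel out.
signed_mean = sum(delta.values()) / len(delta)

# Magnitude-based aggregation: reveals amplification in both directions.
abs_mean = sum(abs(v) for v in delta.values()) / len(delta)

print(f"signed mean: {signed_mean:+.3f}")  # near zero: looks like no amplification
print(f"abs mean:    {abs_mean:.3f}")      # nonzero: amplification is present
```

Here the signed mean is exactly zero while the mean magnitude is 0.09, which is the kind of cancellation the proposed Multi-Attribute Bias Amplification metric is designed to avoid.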
