Paper Title
Debiasing Methods for Fairer Neural Models in Vision and Language Research: A Survey
Paper Authors
Paper Abstract
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to modeling biases within the data instead of focusing on actually useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from engaging in unfair decision-making, the AI community has concentrated efforts on correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss current challenges, trends, and important future work directions for interested researchers and practitioners.
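For concreteness, below is a minimal sketch (illustrative only, not drawn from the survey itself) of one widely used group-fairness criterion, demographic parity: a classifier satisfies it when its positive-prediction rate is similar across groups defined by a protected attribute such as gender. The function name demographic_parity_gap and the toy data are hypothetical.

    # Minimal sketch of a demographic-parity check (illustrative assumption,
    # not a method proposed in the survey).
    import numpy as np

    def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Absolute difference in positive-prediction rates between two groups.

        y_pred: binary model predictions (0/1).
        group:  binary protected-attribute labels (0/1), e.g. a gender flag.
        """
        rate_a = y_pred[group == 0].mean()  # positive rate for group 0
        rate_b = y_pred[group == 1].mean()  # positive rate for group 1
        return abs(rate_a - rate_b)

    # Hypothetical toy example: a classifier that favors group 1.
    y_pred = np.array([0, 0, 1, 0, 1, 1, 1, 1])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(demographic_parity_gap(y_pred, group))  # 0.75 -> large disparity

A gap near zero suggests the model's positive outcomes are distributed similarly across the two groups under this one criterion; debiasing methods surveyed in the paper aim to reduce such disparities, typically while preserving task accuracy.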