Paper Title
BiaScope: Visual Unfairness Diagnosis for Graph Embeddings
Paper Authors
Paper Abstract
The issue of bias (i.e., systematic unfairness) in machine learning models has recently attracted the attention of both researchers and practitioners. For the graph mining community in particular, an important goal toward algorithmic fairness is to detect and mitigate bias incorporated into graph embeddings, since they are commonly used in human-centered applications, e.g., social-media recommendations. However, simple analytical methods for detecting bias typically involve aggregate statistics which do not reveal the sources of unfairness. Instead, visual methods can provide a holistic fairness characterization of graph embeddings and help uncover the causes of observed bias. In this work, we present BiaScope, an interactive visualization tool that supports end-to-end visual unfairness diagnosis for graph embeddings. The tool is the product of a design study in collaboration with domain experts. It allows the user to (i) visually compare two embeddings with respect to fairness, (ii) locate nodes or graph communities that are unfairly embedded, and (iii) understand the source of bias by interactively linking the relevant embedding subspace with the corresponding graph topology. Experts' feedback confirms that our tool is effective at detecting and diagnosing unfairness. Thus, we envision our tool both as a companion for researchers designing their algorithms and as a guide for practitioners who use off-the-shelf graph embeddings.
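To make the contrast between aggregate statistics and node-level diagnosis concrete, below is a minimal Python sketch. It is not BiaScope's actual method; the inputs (emb, adj, groups) and both scoring functions are hypothetical illustrations of the two kinds of measures the abstract contrasts: a single summary number versus a per-node score that points to where unfairness arises.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic inputs: a 2-D node embedding, an undirected
# adjacency matrix, and a binary sensitive attribute per node.
n = 40
emb = rng.normal(size=(n, 2))
adj = (rng.random((n, n)) < 0.15).astype(int)
adj = np.triu(adj, 1)
adj = adj + adj.T  # symmetric, no self-loops
groups = (np.arange(n) < n // 2).astype(int)

def group_centroid_gap(emb, groups):
    # Aggregate statistic: distance between the two group centroids.
    # A single summary number like this can signal that groups differ
    # on average, but cannot say *which* nodes are embedded unfairly.
    c0 = emb[groups == 0].mean(axis=0)
    c1 = emb[groups == 1].mean(axis=0)
    return float(np.linalg.norm(c0 - c1))

def node_unfairness(emb, adj, groups):
    # Per-node diagnostic: how much farther a node sits, in embedding
    # space, from its other-group neighbors than from its same-group
    # neighbors. Large values flag nodes whose embedding separates the
    # groups more strongly than their direct topology would suggest.
    scores = np.zeros(len(emb))
    for i in range(len(emb)):
        nbrs = np.flatnonzero(adj[i])
        same = nbrs[groups[nbrs] == groups[i]]
        other = nbrs[groups[nbrs] != groups[i]]
        if len(same) and len(other):
            d_same = np.linalg.norm(emb[same] - emb[i], axis=1).mean()
            d_other = np.linalg.norm(emb[other] - emb[i], axis=1).mean()
            scores[i] = d_other - d_same
    return scores

print("aggregate centroid gap:", round(group_centroid_gap(emb, groups), 3))
worst = np.argsort(node_unfairness(emb, adj, groups))[-5:]
print("nodes to inspect first:", worst)

The per-node scores play the role of step (ii) in the abstract (locating unfairly embedded nodes); a tool like BiaScope would then link such flagged nodes back to the graph topology for step (iii).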