Paper Title

Towards Multi-Objective Statistically Fair Federated Learning

Paper Authors

Ninareh Mehrabi, Cyprien de Lichy, John McKay, Cynthia He, William Campbell

Abstract

Federated Learning (FL) emerged from data ownership and privacy concerns, allowing multiple parties to participate in a training procedure without sharing their data. Although issues such as privacy have gained significant attention in this domain, much less attention has been paid to satisfying statistical fairness measures in the FL setting. With this goal in mind, we conduct studies showing that FL can satisfy different fairness metrics under different data regimes consisting of different types of clients. More specifically, uncooperative or adversarial clients might contaminate the global FL model by injecting biased or poisoned models due to biases existing in their training datasets. Those biases might result from an imbalanced training set (Zhang and Zhou 2019), historical biases (Mehrabi et al. 2021a), or poisoned data points from data poisoning attacks against fairness (Mehrabi et al. 2021b; Solans, Biggio, and Castillo 2020). We therefore propose a new FL framework able to satisfy multiple objectives, including various statistical fairness metrics. Through experimentation, we demonstrate the method's effectiveness by comparing it with various baselines, its ability to satisfy different objectives collectively and individually, and its ability to identify uncooperative or adversarial clients and down-weigh their effect.
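The abstract does not specify the aggregation rule, so the following is only an illustrative sketch, not the paper's actual method: one way a server could "down-weigh" biased clients is to evaluate each client's local model on held-out validation data, compute a statistical fairness gap (demographic parity is used here as one example metric), and then aggregate client parameters with softmax weights that shrink as the gap grows. All function names, the softmax scheme, and the `temperature` parameter are assumptions introduced for illustration.

```python
import numpy as np

def demographic_parity_gap(preds, groups):
    # Absolute difference in positive-prediction rates between two groups
    # (groups encoded as 0/1). One common statistical fairness metric.
    preds, groups = np.asarray(preds), np.asarray(groups)
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

def aggregate(client_models, fairness_gaps, temperature=1.0):
    # Weighted average of client parameter vectors: clients whose local
    # models show larger fairness gaps receive smaller aggregation weights.
    gaps = np.asarray(fairness_gaps, dtype=float)
    logits = -gaps / temperature            # larger gap -> smaller logit
    weights = np.exp(logits - logits.max()) # numerically stable softmax
    weights /= weights.sum()
    stacked = np.stack(client_models)       # shape: (n_clients, n_params)
    return weights @ stacked                # weighted parameter average
```

With this scheme, a client whose model exhibits a fairness gap of 0.8 contributes far less to the global average than two clients with gaps near 0.02, so its outlier parameters are largely suppressed.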
