Paper Title

A Survey on Preserving Fairness Guarantees in Changing Environments

Paper Authors

Ainhize Barrainkua, Paula Gordaliza, Jose A. Lozano, Novi Quadrianto

Paper Abstract

Human lives are increasingly affected by the outcomes of automated decision-making systems, and it is essential for the latter to be not only accurate but also fair. The literature on algorithmic fairness has grown considerably over the last decade, yet most approaches are evaluated under the strong assumption that the training and test samples are drawn independently and identically from the same underlying distribution. In practice, however, the training and deployment environments differ, which compromises both the performance of the decision-making algorithm and its fairness guarantees on the deployment data. A rapidly growing line of research studies how to preserve fairness guarantees when the data-generating processes differ between the source (train) and target (test) domains. With this survey, we aim to provide a broad and unifying overview of the topic. To this end, we propose a taxonomy of existing approaches to fair classification under distribution shift, highlight benchmarking alternatives, point out relations to other similar research fields, and identify future avenues of research.
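The failure mode the abstract describes is easy to see concretely. Below is a minimal, self-contained sketch (not taken from the survey; the synthetic data-generating process, the `sample` and `demographic_parity_gap` helpers, and all parameter values are illustrative assumptions) showing how a demographic-parity gap measured on source data can widen once the target distribution shifts:

```python
# Minimal illustrative sketch (not from the survey): a classifier that looks
# fair on the source domain loses its parity guarantee under covariate shift.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Draw (X, a, y): one feature, a binary group a, and a binary label y.
    `shift` translates the feature distribution of group a=1 (covariate shift)."""
    a = (rng.random(n) < 0.5).astype(int)
    x = rng.normal(loc=a * shift, scale=1.0, size=n)
    y = (x + rng.normal(0.0, 0.5, size=n) > 0.0).astype(int)
    return x.reshape(-1, 1), a, y

def demographic_parity_gap(y_hat, a):
    """|P(y_hat = 1 | a = 1) - P(y_hat = 1 | a = 0)|."""
    return abs(y_hat[a == 1].mean() - y_hat[a == 0].mean())

# Source (train) domain: both groups share the same feature distribution,
# so the fitted classifier satisfies demographic parity almost exactly.
X_src, a_src, y_src = sample(20_000, shift=0.0)
clf = LogisticRegression().fit(X_src, y_src)
print("source DP gap:", demographic_parity_gap(clf.predict(X_src), a_src))

# Target (test) domain: the features of group a=1 have shifted, so the
# parity gap observed on the source no longer holds at deployment.
X_tgt, a_tgt, _ = sample(20_000, shift=1.5)
print("target DP gap:", demographic_parity_gap(clf.predict(X_tgt), a_tgt))
```

On a typical run the source gap is close to zero while the target gap is large, which is precisely the degradation of fairness guarantees under distribution shift that the surveyed methods aim to prevent.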
