Paper Title

Fair Representation Learning through Implicit Path Alignment

Paper Authors

Changjian Shui, Qi Chen, Jiaqi Li, Boyu Wang, Christian Gagné

Paper Abstract

We consider a fair representation learning perspective, where optimal predictors, on top of the data representation, are ensured to be invariant with respect to different sub-groups. Specifically, we formulate this intuition as a bi-level optimization, where the representation is learned in the outer loop, and invariant optimal group predictors are updated in the inner loop. Moreover, the proposed bi-level objective is demonstrated to fulfill the sufficiency rule, which is desirable in various practical scenarios but has not been commonly studied in fair learning. In addition, to avoid the high computational and memory cost of differentiating through the inner loop of the bi-level objective, we propose an implicit path alignment algorithm, which relies only on the solution of the inner optimization and implicit differentiation, rather than on the exact optimization path. We further analyze the error gap of the implicit approach and empirically validate the proposed method in both classification and regression settings. Experimental results show a consistently better trade-off between prediction performance and fairness measures.
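
To make the bi-level structure concrete, the sketch below writes a schematic objective consistent with the abstract's description. The symbols are notational assumptions rather than the paper's exact formulation: φ is the shared representation, w_a*(φ) is the optimal predictor for sub-group a, ℓ_a is the prediction loss on that sub-group, and d(·,·) is some discrepancy measure between group predictors. The second expression recalls the standard implicit-function-theorem identity that lets the outer gradient be formed from the inner solution alone, without unrolling the inner optimization path.

```latex
% Schematic bi-level objective (notation assumed, not the paper's exact form):
% the outer loop learns the representation \phi, the inner loop fits per-group predictors.
\[
\begin{aligned}
\min_{\phi}\quad & \sum_{a}\ell_a\!\big(w_a^*(\phi),\phi\big)
                   \;+\;\lambda\, d\!\big(w_0^*(\phi),\,w_1^*(\phi)\big)\\
\text{s.t.}\quad & w_a^*(\phi)\in\arg\min_{w}\;\ell_a(w,\phi)
                   \qquad\text{for each sub-group } a.
\end{aligned}
\]

% Implicit differentiation: at an inner optimum, \nabla_w \ell_a(w_a^*,\phi)=0, so the
% implicit function theorem gives the Jacobian of w_a^* with respect to \phi using only
% the inner solution, not the optimization trajectory that produced it:
\[
\frac{\partial w_a^*}{\partial \phi}
  \;=\; -\Big(\nabla^2_{ww}\,\ell_a(w_a^*,\phi)\Big)^{-1}\,
        \nabla^2_{w\phi}\,\ell_a(w_a^*,\phi).
\]
```

In practice, the inverse-Hessian term in the second expression is typically not formed explicitly; it is approximated through Hessian-vector products (e.g., via conjugate gradient), which is what keeps memory usage independent of the number of inner optimization steps.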
