Paper Title

SecureFedYJ: a safe feature Gaussianization protocol for Federated Learning

Paper Authors

Tanguy Marchand, Boris Muzellec, Constance Beguier, Jean Ogier du Terrail, Mathieu Andreux

Paper Abstract

The Yeo-Johnson (YJ) transformation is a standard parametrized per-feature unidimensional transformation often used to Gaussianize features in machine learning. In this paper, we investigate the problem of applying the YJ transformation in a cross-silo Federated Learning setting under privacy constraints. For the first time, we prove that the YJ negative log-likelihood is in fact convex, which allows us to optimize it with exponential search. We numerically show that the resulting algorithm is more stable than the state-of-the-art approach based on the Brent minimization method. Building on this simple algorithm and Secure Multiparty Computation routines, we propose SecureFedYJ, a federated algorithm that performs a pooled-equivalent YJ transformation without leaking more information than the final fitted parameters do. Quantitative experiments on real data demonstrate that, in addition to being secure, our approach reliably normalizes features across silos as well as if data were pooled, making it a viable approach for safe federated feature Gaussianization.
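
The abstract refers to two building blocks that are easy to illustrate in isolation: the per-feature Yeo-Johnson transform and the fitting of its parameter λ by exploiting the convexity of the negative log-likelihood. The sketch below is a minimal, non-federated Python illustration under stated assumptions: the function names, tolerances, and the simple bracketing-plus-bisection search are illustrative stand-ins inspired by the exponential-search idea, not the authors' SecureFedYJ implementation (which additionally relies on Secure Multiparty Computation across silos).

```python
# Illustrative sketch only: Yeo-Johnson transform, its profile negative
# log-likelihood (up to additive constants), and a convexity-based search
# for lambda. Not the paper's code; names and tolerances are assumptions.
import numpy as np


def yeo_johnson(x: np.ndarray, lam: float) -> np.ndarray:
    """Apply the Yeo-Johnson transformation with parameter lam."""
    out = np.empty_like(x, dtype=float)
    pos = x >= 0
    if abs(lam) > 1e-12:
        out[pos] = ((x[pos] + 1.0) ** lam - 1.0) / lam
    else:
        out[pos] = np.log1p(x[pos])
    if abs(lam - 2.0) > 1e-12:
        out[~pos] = -(((-x[~pos] + 1.0) ** (2.0 - lam) - 1.0) / (2.0 - lam))
    else:
        out[~pos] = -np.log1p(-x[~pos])
    return out


def neg_log_likelihood(x: np.ndarray, lam: float) -> float:
    """Profile negative log-likelihood of the YJ transform (constants dropped)."""
    n = x.size
    z = yeo_johnson(x, lam)
    return 0.5 * n * np.log(z.var()) - (lam - 1.0) * np.sum(
        np.sign(x) * np.log1p(np.abs(x))
    )


def fit_lambda(x: np.ndarray, tol: float = 1e-6, max_iter: int = 100) -> float:
    """Find the minimizing lambda by doubling a bracket, then bisecting.

    Because the NLL is convex in lambda (as proved in the paper), the sign of
    a finite-difference slope tells us on which side of the minimum we are.
    """
    eps = 1e-5

    def slope(lam: float) -> float:
        return neg_log_likelihood(x, lam + eps) - neg_log_likelihood(x, lam - eps)

    # Double the bracket outward until it contains the minimizer (with a cap
    # as a safety net for pathological inputs).
    lo, hi = -1.0, 1.0
    while slope(lo) > 0 and lo > -1e6:
        lo *= 2.0
    while slope(hi) < 0 and hi < 1e6:
        hi *= 2.0

    # Bisection on the slope sign inside the bracket.
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if slope(mid) > 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

A typical (hypothetical) usage would be `lam = fit_lambda(x)` followed by `z = yeo_johnson(x, lam)` and a final standardization of `z`; SecureFedYJ performs the analogous fit jointly across silos without revealing more than the fitted parameters.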
