Paper Title

Fully Decentralized, Scalable Gaussian Processes for Multi-Agent Federated Learning

Authors

George P. Kontoudis, Daniel J. Stilwell

Abstract

In this paper, we propose decentralized and scalable algorithms for Gaussian process (GP) training and prediction in multi-agent systems. To decentralize the implementation of GP training optimization algorithms, we employ the alternating direction method of multipliers (ADMM). A closed-form solution of the decentralized proximal ADMM is provided for the case of GP hyper-parameter training with maximum likelihood estimation. Multiple aggregation techniques for GP prediction are decentralized with the use of iterative and consensus methods. In addition, we propose a covariance-based nearest neighbor selection strategy that enables a subset of agents to perform predictions. The efficacy of the proposed methods is illustrated with numerical experiments on synthetic and real data.
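
To make the prediction-aggregation idea above concrete, the following Python sketch partitions a toy dataset across a few agents, lets each agent compute a local GP posterior, and fuses the local means and variances with a product-of-experts style precision weighting. This is only an illustrative assumption of the general multi-agent GP setup, not the paper's decentralized proximal-ADMM training or its iterative/consensus aggregation; the kernel choice, the fixed hyperparameters, and helpers such as poe_aggregate are hypothetical.

# Minimal illustrative sketch, NOT the paper's decentralized ADMM/consensus method:
# each agent fits a local GP on its own data partition, and the local predictions
# are fused with a product-of-experts (precision-weighted) aggregation.
import numpy as np


def rbf_kernel(A, B, lengthscale=0.5, signal_var=1.0):
    # Squared-exponential kernel k(a, b) = s^2 * exp(-||a - b||^2 / (2 l^2)).
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)


def local_gp_predict(X, y, X_test, noise_var=1e-2):
    # Standard GP posterior mean and variance computed by a single agent
    # from its local training set (X, y).
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    Ks = rbf_kernel(X, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = np.diag(rbf_kernel(X_test, X_test)) - np.sum(v**2, axis=0) + noise_var
    return mean, var


def poe_aggregate(means, variances):
    # Product-of-experts fusion: combine local posteriors with precision weights.
    prec = 1.0 / np.asarray(variances)      # shape (n_agents, n_test)
    agg_var = 1.0 / prec.sum(axis=0)
    agg_mean = agg_var * (prec * np.asarray(means)).sum(axis=0)
    return agg_mean, agg_var


rng = np.random.default_rng(0)
f = lambda x: np.sin(3.0 * x).ravel()
X_all = rng.uniform(-2.0, 2.0, size=(60, 1))
y_all = f(X_all) + 0.1 * rng.standard_normal(60)
X_test = np.linspace(-2.0, 2.0, 50)[:, None]

# Partition the data across 4 agents; each agent only sees its own subset.
partitions = np.array_split(rng.permutation(60), 4)
local_preds = [local_gp_predict(X_all[idx], y_all[idx], X_test) for idx in partitions]
mean, var = poe_aggregate([m for m, _ in local_preds], [v for _, v in local_preds])
print("RMSE of fused prediction:", np.sqrt(np.mean((mean - f(X_test)) ** 2)))

In the paper, by contrast, both the hyperparameter training (via proximal ADMM with maximum likelihood estimation) and the aggregation step are carried out without a central node, using only communication between neighboring agents.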
