Title

BayGo: Joint Bayesian Learning and Information-Aware Graph Optimization

Authors

Tamara Alshammari, Sumudu Samarakoon, Anis Elgabli, Mehdi Bennis

Abstract

This paper deals with the problem of distributed machine learning, in which agents update their models based on their local datasets and aggregate the updated models collaboratively, in a fully decentralized manner. We tackle the problem of information heterogeneity arising in multi-agent networks, where the placement of informative agents plays a crucial role in the learning dynamics. Specifically, we propose BayGo, a novel fully decentralized joint Bayesian learning and graph optimization framework with proven fast convergence over a sparse graph. Under our framework, agents are able to learn from and communicate with the agents most informative to their own learning. Unlike prior works, our framework assumes no prior knowledge of the data distribution across agents, nor does it assume any knowledge of the true parameter of the system. The proposed alternating-minimization-based framework ensures global connectivity in a fully decentralized way while minimizing the number of communication links. We show theoretically that, by optimizing the proposed objective function, the estimation error of the posterior probability distribution decreases exponentially at each iteration. Via extensive simulations, we show that our framework achieves faster convergence and higher accuracy compared to fully connected and star-topology graphs.
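
Since the paper's algorithm is not reproduced here, the following is a minimal, hypothetical Python sketch of one BayGo-style round under illustrative assumptions: each agent keeps a posterior over a finite hypothesis set, performs a local Bayesian update on its own data, and then links to the single peer whose posterior diverges most from its own (a KL-divergence stand-in for "most informative"), pooling the two posteriors log-linearly. The hypothesis set, likelihood model, pooling weight, and neighbor-selection rule are all assumptions for illustration, not the paper's exact updates or its connectivity guarantee.

```python
import numpy as np

# Hypothetical sketch of one BayGo-style round (not the paper's algorithm):
# local Bayesian updates alternating with an information-aware graph step.

rng = np.random.default_rng(0)
num_agents, num_hypotheses = 5, 4

# likelihoods[i, k]: probability that hypothesis k assigns to agent i's
# observations (a toy stand-in for each agent's local likelihood model).
likelihoods = rng.uniform(0.1, 1.0, size=(num_agents, num_hypotheses))
posteriors = np.full((num_agents, num_hypotheses), 1.0 / num_hypotheses)

def bayes_step(posterior, likelihood):
    """Local Bayesian update: multiply by the likelihood and renormalize."""
    updated = posterior * likelihood
    return updated / updated.sum()

def pool(own, neighbor, weight=0.5):
    """Geometric (log-linear) pooling of two posteriors."""
    mixed = own ** (1 - weight) * neighbor ** weight
    return mixed / mixed.sum()

def kl(p, q):
    """KL divergence between two discrete distributions (both positive)."""
    return float(np.sum(p * np.log(p / q)))

for t in range(20):
    # Learning step: every agent updates on its own local data.
    posteriors = np.array([bayes_step(posteriors[i], likelihoods[i])
                           for i in range(num_agents)])
    # Graph step: each agent links to the peer whose posterior diverges
    # most from its own (illustrative "most informative" rule), keeping
    # the communication graph sparse with one outgoing link per agent.
    new_posteriors = posteriors.copy()
    for i in range(num_agents):
        scores = [kl(posteriors[j], posteriors[i]) if j != i else -np.inf
                  for j in range(num_agents)]
        best = int(np.argmax(scores))
        new_posteriors[i] = pool(posteriors[i], posteriors[best])
    posteriors = new_posteriors

print(np.round(posteriors, 3))  # agents' posteriors after 20 rounds
```

Pooling in log space keeps each aggregate a proper probability distribution; in the paper itself, both the neighbor selection and the connectivity guarantee come from optimizing a joint objective via alternating minimization, which this toy greedy rule does not capture.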
