Paper Title

Distributed Stochastic Gradient Descent with Cost-Sensitive and Strategic Agents

Authors

Abdullah Basar Akbay, Cihan Tepedelenlioglu

Abstract

This study considers a federated learning setup where cost-sensitive and strategic agents train a learning model with a server. During each round, each agent samples a minibatch of training data and sends his gradient update. The agent incurs a cost, increasing in his minibatch size choice, associated with data collection, gradient computation, and communication. The agents have the freedom to choose their minibatch sizes and may even opt out of training. To reduce his cost, an agent may shrink his minibatch size, which may also raise the noise level of his gradient update. The server can offer rewards to compensate the agents for their costs and to incentivize their participation, but she lacks the capability of validating the agents' true minibatch sizes. To tackle this challenge, the proposed reward mechanism evaluates the quality of each agent's gradient according to its distance to a reference constructed from the gradients provided by the other agents. It is shown that the proposed reward mechanism has a cooperative Nash equilibrium in which the agents choose their minibatch sizes according to the server's requests.
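The abstract does not give the exact form of the payment rule, so the sketch below only illustrates the general idea in Python: minibatch noise shrinks with batch size, and each agent's reward falls with the distance between his gradient and a reference built from the other agents' gradients. The function names (`noisy_gradient`, `reward`), the Gaussian noise model, the leave-one-out mean reference, and the quadratic distance penalty are all illustrative assumptions, not the paper's actual mechanism.

```python
import random

def noisy_gradient(true_grad, batch_size, noise_scale=1.0):
    # Assumed noise model: the standard deviation of a minibatch gradient
    # estimate scales as 1/sqrt(batch_size), so smaller (cheaper)
    # minibatches yield noisier updates.
    sigma = noise_scale / batch_size ** 0.5
    return [g + random.gauss(0.0, sigma) for g in true_grad]

def reward(agent_grad, other_grads, scale=1.0):
    # Leave-one-out reference: coordinate-wise average of the OTHER
    # agents' gradients, so an agent cannot influence his own benchmark.
    n = len(other_grads)
    ref = [sum(col) / n for col in zip(*other_grads)]
    # Illustrative quadratic penalty: pay less the farther the agent's
    # gradient lies from the reference, floored at zero.
    dist_sq = sum((a - r) ** 2 for a, r in zip(agent_grad, ref))
    return max(0.0, scale - dist_sq)
```

An agent who under-reports his minibatch size saves cost but submits a noisier gradient, which on average lands farther from the reference and so earns a lower reward; the paper's result is that a suitably tuned rule of this kind makes the server's requested minibatch sizes a Nash equilibrium.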
