Paper Title

Communication Compression for Decentralized Learning with Operator Splitting Methods

Paper Authors

Yuki Takezawa, Kenta Niwa, Makoto Yamada

Paper Abstract

In decentralized learning, operator splitting methods using a primal-dual formulation (e.g., Edge-Consensus Learning (ECL)) have been shown to be robust to heterogeneous data and have attracted significant attention in recent years. However, in the ECL, a node needs to exchange dual variables with its neighbors, and these exchanges incur significant communication costs. For Gossip-based algorithms, many compression methods have been proposed, but these Gossip-based algorithms do not perform well when the data distribution held by each node is statistically heterogeneous. In this work, we propose a novel framework of compression methods for the ECL, called Communication Compressed ECL (C-ECL). Specifically, we reformulate the update formulas of the ECL and propose compressing the update values of the dual variables. We demonstrate experimentally that the C-ECL can achieve nearly equivalent performance with fewer parameter exchanges than the ECL. Moreover, we demonstrate that the C-ECL is more robust to heterogeneous data than Gossip-based algorithms.
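
To make the core idea in the abstract concrete, below is a minimal sketch of compressing the *update* of a dual variable (the difference from the previously transmitted value) rather than the variable itself, using top-k sparsification as an example compression operator. This is an illustrative assumption, not the paper's exact algorithm: the names `top_k_compress`, `DualState`, `make_message`, and `apply_message` are all hypothetical, and C-ECL's actual update rules are derived from the ECL's primal-dual formulation in the paper.

```python
# A minimal sketch (not the paper's exact algorithm) of compressing
# dual-variable updates before sending them to a neighbor.
# All class/function names here are illustrative assumptions.
import numpy as np

def top_k_compress(v: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    if k > 0:
        idx = np.argpartition(np.abs(v), -k)[-k:]
        out[idx] = v[idx]
    return out

class DualState:
    """Dual variable a node maintains for one neighbor link."""
    def __init__(self, dim: int):
        self.y = np.zeros(dim)  # last value the neighbor knows about

    def make_message(self, y_new: np.ndarray, k: int) -> np.ndarray:
        """Sender side: compress the update (difference), not y itself."""
        delta = top_k_compress(y_new - self.y, k)
        self.y = self.y + delta  # track what the neighbor reconstructs
        return delta             # sparse message: cheap to communicate

    def apply_message(self, delta: np.ndarray) -> np.ndarray:
        """Receiver side: reconstruct the sender's dual variable."""
        self.y = self.y + delta
        return self.y

# Toy usage: one node sends roughly 10% of the entries per round.
rng = np.random.default_rng(0)
dim = 100
sender, receiver = DualState(dim), DualState(dim)
y_new = rng.normal(size=dim)              # new dual iterate on the sender
msg = sender.make_message(y_new, k=dim // 10)
receiver.apply_message(msg)
print(np.count_nonzero(msg), "of", dim, "entries transmitted")
```

Because both endpoints apply the same sparse delta, sender and receiver stay synchronized on the reconstructed dual variable, and the per-round communication cost drops from `dim` entries to `k` entries at the price of an approximation error in the update.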
