Paper Title

DBS: Dynamic Batch Size For Distributed Deep Neural Network Training

Authors

Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan Sun, Jiancheng Lv

Abstract

Synchronous strategies with data parallelism, such as Synchronous Stochastic Gradient Descent (S-SGD) and the model averaging methods, are widely utilized in the distributed training of Deep Neural Networks (DNNs), largely owing to their easy implementation yet promising performance. In particular, each worker of the cluster hosts a copy of the DNN and an evenly divided share of the dataset with a fixed mini-batch size, to keep the training of the DNNs convergent. In these strategies, workers with different computational capabilities need to wait for each other because of synchronization and delays in network transmission, which inevitably causes the high-performance workers to waste computation. Consequently, the utilization of the cluster is relatively low. To alleviate this issue, we propose the Dynamic Batch Size (DBS) strategy for the distributed training of DNNs. Specifically, the performance of each worker is first evaluated based on its measured performance in the previous epoch, and then the batch size and dataset partition are dynamically adjusted in consideration of the current performance of the worker, thereby improving the utilization of the cluster. To verify the effectiveness of the proposed strategy, extensive experiments have been conducted, and the experimental results indicate that the proposed strategy can fully utilize the performance of the cluster, reduce the training time, and has good robustness against disturbance by irrelevant tasks. Furthermore, a rigorous theoretical analysis is also provided to prove the convergence of the proposed strategy.
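
The adjustment step the abstract describes (measure each worker's speed in the previous epoch, then resize its batch and data share accordingly) can be sketched in a few lines. The sketch below is a minimal illustration under assumptions: the function name `rebalance_batch_sizes`, the throughput metric, and the proportional-allocation rule are ours for illustration, not taken from the paper's implementation.

```python
# Hypothetical sketch of the DBS idea: after each epoch, every worker's
# batch size is re-scaled in proportion to its measured throughput, so
# faster workers receive larger batches while the global batch size stays
# fixed. Names and the allocation rule are assumptions, not the paper's code.

def rebalance_batch_sizes(samples_processed, epoch_times, total_batch_size):
    """Re-split the fixed global batch size among workers in proportion
    to the throughput (samples/second) each worker achieved last epoch."""
    throughputs = [n / t for n, t in zip(samples_processed, epoch_times)]
    total = sum(throughputs)
    new_sizes = [round(total_batch_size * s / total) for s in throughputs]
    # Correct rounding drift so the global batch size stays exactly fixed.
    new_sizes[-1] += total_batch_size - sum(new_sizes)
    return new_sizes

# Example: worker 1 processed the same data share twice as fast as worker 0,
# so in the next epoch it receives roughly twice the batch size.
print(rebalance_batch_sizes([50_000, 50_000], [100.0, 50.0], 128))
# -> [43, 85]
```

Presumably the dataset partition is rebalanced with the same proportions, so every worker performs the same number of iterations per epoch and the synchronization points stay aligned.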
