US-Byte: An Efficient Communication Framework for Scheduling Unequal-Sized Tensor Blocks in Distributed Deep Learning

Yunqi Gao, Bing Hu*, Mahdi Boloursaz Mashhadi, A-Long Jin, Pei Xiao, Chunming Wu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

The communication bottleneck severely constrains the scalability of distributed deep learning, and efficient communication scheduling accelerates distributed DNN training by overlapping computation and communication tasks. However, existing approaches based on tensor partitioning are inefficient and face two challenges: 1) a fixed number of tensor blocks transferred in parallel does not necessarily minimize communication overhead; 2) although a scheduling order that preferentially transmits tensor blocks close to the input layer allows forward propagation in the next iteration to start earlier, it does not yield the shortest per-iteration time. In this paper, we propose an efficient communication framework called US-Byte, which schedules unequal-sized tensor blocks in a near-optimal order to minimize training time. We model US-Byte in two phases: 1) overlapping gradient communication with backward propagation, and 2) overlapping gradient communication with forward propagation. We theoretically derive the optimal solution for the second phase and solve the first phase efficiently with a low-complexity algorithm. We implement US-Byte on the PyTorch framework. Extensive experiments on two different 8-node GPU clusters demonstrate that US-Byte achieves up to 1.26x and 1.56x speedup over ByteScheduler and WFBP, respectively. We further run simulations with 128 GPUs to evaluate the potential scaling performance of US-Byte; the results show up to 1.69x speedup over the state-of-the-art communication framework.
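
To make the overlapping idea concrete, the sketch below shows one way gradient communication can be interleaved with backward propagation in PyTorch: each parameter's gradient is split into unequal-sized blocks, and an asynchronous all-reduce is launched per block as soon as the gradient is produced. This is a minimal illustration only, not the authors' US-Byte implementation or its scheduling policy; the block-size ratios, hook placement, and helper names (partition_unequal, attach_overlap_hooks, finish_communication) are assumptions, and a torch.distributed process group is assumed to be initialized (e.g., via torchrun).

    # Illustrative sketch only -- not the US-Byte scheduler from the paper.
    # Assumes torch.distributed is initialized and PyTorch >= 2.1 for
    # register_post_accumulate_grad_hook.
    import torch
    import torch.distributed as dist


    def partition_unequal(grad, ratios=(0.5, 0.3, 0.2)):
        # Split a flattened gradient into unequal-sized contiguous blocks.
        # The ratios are placeholders; US-Byte derives block sizes from its model.
        flat = grad.reshape(-1)  # assumes a contiguous gradient, so blocks are views
        sizes = [int(r * flat.numel()) for r in ratios]
        sizes[-1] = flat.numel() - sum(sizes[:-1])  # absorb rounding error
        return torch.split(flat, sizes)

    def attach_overlap_hooks(model, pending):
        # Fire a per-parameter hook once the gradient has been accumulated into
        # .grad, so communication of that parameter's blocks overlaps with the
        # backward computation still running for earlier layers.
        def hook(param):
            for block in partition_unequal(param.grad):
                work = dist.all_reduce(block, op=dist.ReduceOp.SUM, async_op=True)
                pending.append((work, block))

        for p in model.parameters():
            if p.requires_grad:
                p.register_post_accumulate_grad_hook(hook)

    def finish_communication(pending, world_size):
        # Wait for outstanding block all-reduces and average the gradients
        # before the optimizer step.
        for work, block in pending:
            work.wait()
            block.div_(world_size)
        pending.clear()

    # Usage per iteration (sketch):
    #   pending = []
    #   attach_overlap_hooks(model, pending)   # once, before training
    #   loss.backward()                        # all-reduces start as grads appear
    #   finish_communication(pending, dist.get_world_size())
    #   optimizer.step()

In US-Byte, the block sizes and the transmission order are chosen by the framework's two-phase scheduling model rather than the fixed ratios and ready-order used in this sketch.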

Original language: English
Pages (from-to): 123-139
Number of pages: 17
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 35
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2024
Externally published: Yes

Keywords

  • Communication scheduling
  • data parallelism
  • distributed deep learning
  • tensor fusion
  • tensor partitioning
