
Improvements in GPipe Pipeline Parallel Acceleration: Choices, Constraints and Optimal Strategies of Micro-Batch

Authors
Riqian Hu1, *
1HDU-ITMO Joint Institute, Hangzhou Dianzi University, Hangzhou, 310000, China
*Corresponding author. Email: 20321114@hdu.edu.cn
Available Online 16 October 2024.
DOI
10.2991/978-94-6463-540-9_101
Keywords
Multi-card training; Pipeline parallelism; GPipe; Micro-batch
Abstract

As deep learning models continue to grow in scale, large models in machine vision and natural language processing (NLP) have achieved tremendous success; the NLP giant GPT-3, for instance, has pushed the parameter count to 175 billion. Because such large deep neural networks far exceed the physical memory of a single GPU, strategies such as data parallelism alone are no longer sufficient for model training. Recent pipeline parallelism strategies, such as the static layer partitioning of GPipe and PipeDream and the dynamic layer partitioning of VPipe, make it possible to partition model training across devices and accelerate it. In pipeline strategies such as GPipe, the batch-splitting pipelining algorithm divides each mini-batch into smaller micro-batches so that the computation of successive micro-batches overlaps across pipeline stages. Users usually have to tune the granularity of this split, i.e., the micro-batch size (M), by hand, observing throughput changes to find the optimal value. This article observes that M alone is not the fundamental factor affecting throughput, and proposes that the ratio of batch size to micro-batch size (B/M) is the decisive quantity governing throughput. The article proves the rationality of B/M as the controlling factor and quantitatively derives its selection range. For any given multi-GPU training scenario, analyzing the optimal value of B/M in advance reduces tuning cost and allows throughput to be maximized before training begins, improving the efficiency of multi-GPU parallelism.
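A note on the mechanics (added for context; the bubble formula comes from the original GPipe paper, not this article): if a mini-batch of size B is split into B/M micro-batches and pipelined over K stages, the idle "bubble" fraction of the schedule is approximately (K - 1) / (B/M + K - 1), so increasing B/M shrinks the bubble until per-micro-batch overhead dominates — consistent with B/M, rather than M alone, governing throughput. The sketch below shows how B/M maps onto the chunks argument of the open-source torchgpipe library; the model, dimensions, and two-GPU balance are illustrative assumptions, not the author's experimental setup.

import torch
from torch import nn
from torchgpipe import GPipe  # assumes the torchgpipe package is installed

B = 256          # mini-batch size (illustrative)
M = 32           # micro-batch size, chosen by the user (illustrative)
chunks = B // M  # B/M: the number of micro-batches per mini-batch

# GPipe-style static layer partitioning requires an nn.Sequential model.
model = nn.Sequential(
    nn.Linear(1024, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 10),
)

# Partition the five layers over two GPUs (3 layers + 2 layers); each
# forward pass splits the mini-batch into B/M micro-batches and
# pipelines them through the two stages.
model = GPipe(model, balance=[3, 2], chunks=chunks)

x = torch.randn(B, 1024).to(model.devices[0])
y = model(x)  # the B/M micro-batches overlap across the two stages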

Copyright
© 2024 The Author(s)
Open Access
This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

Volume Title
Proceedings of the 2024 2nd International Conference on Image, Algorithms and Artificial Intelligence (ICIAAI 2024)
Series
Advances in Computer Science Research
Publication Date
16 October 2024
ISBN
978-94-6463-540-9
ISSN
2352-538X

Cite this article

TY  - CONF
AU  - Riqian Hu
PY  - 2024
DA  - 2024/10/16
TI  - Improvements in GPipe Pipeline Parallel Acceleration: Choices, Constraints and Optimal Strategies of Micro-Batch
BT  - Proceedings of the 2024 2nd International Conference on Image, Algorithms and Artificial Intelligence (ICIAAI 2024)
PB  - Atlantis Press
SP  - 1011
EP  - 1025
SN  - 2352-538X
UR  - https://doi.org/10.2991/978-94-6463-540-9_101
DO  - 10.2991/978-94-6463-540-9_101
ID  - Hu2024
ER  -