2022 Fiscal Year Research-status Report
Scalable Hybrid-parallelism Design for Mega-Size Deep Learning Model
Project/Area Number | 21K17751
Research Institution | National Institute of Advanced Industrial Science and Technology |
Principal Investigator | Nguyen Truong, National Institute of Advanced Industrial Science and Technology, Department of Information Technology and Human Factors, Researcher (60835346)
Project Period (FY) | 2021-04-01 – 2024-03-31
Keywords | Deep Learning / Large Scale / Distributed Computing / Non-IID |
Outline of Annual Research Achievements
This year, we developed new methods to reduce computing time by eliminating non-important samples during training (submitted to ICML 2023; a sketch of the idea appears after this paragraph). Through our previous work (IPDPS 2022), we found that local shuffling could not achieve good accuracy in large-scale training because of non-IID data and overfitting. We address the non-IID issue by dynamically assigning an impact factor to the model from each worker, and we use knowledge distillation to counter overfitting; this work was a Best Paper Award finalist at CCGRID 2023. We also studied how to reduce communication time through a co-design of the collective communication algorithm with the intra-node network architecture (accepted in JPDC, a Q1 journal) and with the inter-node network architecture (poster at HPCA-Asia 2023).
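The submitted paper's exact selection criterion is not described here; the following is a minimal sketch of the general idea, assuming per-sample loss is used as the importance score and the lowest-loss samples are the ones eliminated. The keep_ratio value and the helper name keep_important_samples are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

@torch.no_grad()
def keep_important_samples(model, dataset, keep_ratio=0.8, device="cuda"):
    """Score each sample by its current loss; keep only the hardest fraction."""
    model.eval()
    scores = []
    for x, y in DataLoader(dataset, batch_size=512, shuffle=False):
        logits = model(x.to(device))
        # reduction="none" yields one loss value per sample
        scores.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
    scores = torch.cat(scores)
    k = int(keep_ratio * len(scores))
    keep_idx = torch.topk(scores, k).indices.tolist()  # highest-loss samples
    return Subset(dataset, keep_idx)
```

Training the next epoch on the returned Subset shrinks each epoch's work by (1 - keep_ratio) while concentrating updates on the samples the model still gets wrong.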
Current Status of Research Progress
1: Research has progressed more than it was originally planned.
Reason
We have expanded our international collaborative research with Telecom SudParis (France), Hanoi University of Science and Technology (HUST, Vietnam), and the VinUni-Illinois Smart Health Center at VinUniversity (Vietnam). The CCGRID 2023 paper (with the PI as corresponding author) was selected as a Best Paper Award finalist (top 4 of the 58 accepted papers, out of 275 submissions); a sketch of its aggregation idea appears after this paragraph. In the ICML 2023 submission, empirical results on various large-scale datasets and models for image classification and segmentation show that, while the with-replacement importance-sampling algorithm performs poorly on large datasets, our method reduces total training time by up to 22% while affecting accuracy by only 0.4% compared to the baseline.
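The CCGRID 2023 paper's exact weighting rule is not reproduced here; the sketch below only illustrates the general idea of dynamic impact factors, under the illustrative assumption that each worker's factor is derived from its recent validation loss via a softmax, so that better-performing workers contribute more to the aggregated model.

```python
import torch

def aggregate_with_impact_factors(worker_states, worker_val_losses, temperature=1.0):
    """Weighted average of worker state_dicts; lower loss -> larger impact factor."""
    losses = torch.tensor(worker_val_losses, dtype=torch.float32)
    factors = torch.softmax(-losses / temperature, dim=0)  # dynamic impact factors
    merged = {}
    for key in worker_states[0]:
        merged[key] = sum(f * s[key].float() for f, s in zip(factors, worker_states))
    return merged
```

Because the factors are recomputed every aggregation round, a worker whose local (non-IID) data currently hurts the global model is down-weighted instead of being averaged in uniformly.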
Strategy for Future Research Activity
We will continue to investigate (1) extending our work on I/O to reduce the overhead of partial local shuffling at scale, and (2) extending the methods that reduce computing time by eliminating non-important samples during training. We will also study (3) reducing communication time by overlapping communication with computation; a sketch of this overlap appears after this paragraph.
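As a starting point for item (3), the sketch below overlaps gradient communication with backward computation by launching a non-blocking all-reduce as soon as each parameter's gradient is produced. It assumes an initialized torch.distributed process group and one backward pass per step; production systems such as PyTorch DDP implement a bucketed variant of the same idea.

```python
import torch.distributed as dist

def attach_overlap_hooks(model):
    """Launch a non-blocking all-reduce the moment each gradient is ready."""
    pending = []

    def hook(grad):
        # Backward keeps computing earlier layers' gradients while this
        # transfer is in flight, hiding communication behind computation.
        pending.append(dist.all_reduce(grad, op=dist.ReduceOp.SUM, async_op=True))
        return grad

    for p in model.parameters():
        if p.requires_grad:
            p.register_hook(hook)

    def finalize():
        # Drain outstanding transfers and average before optimizer.step().
        for work in pending:
            work.wait()
        pending.clear()
        for p in model.parameters():
            if p.grad is not None:
                p.grad.div_(dist.get_world_size())

    return finalize
```

Bucketing several small gradients into one transfer amortizes per-message latency, which is why practical implementations group parameters rather than all-reducing each tensor individually.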
Causes of Carryover
In the next fiscal year, we will conduct a wide range of large-scale experiments on a supercomputer system; the carried-over funds will pay for the use of the ABCI supercomputer.
Research Products (3 results)