2021 Fiscal Year Research-status Report
Machine learning driven system level heterogeneous memory management for high-performance computing
Project/Area Number |
19K11993
|
Research Institution | Institute of Physical and Chemical Research |
Principal Investigator |
GEROFI BALAZS RIKEN Center for Computational Science, Senior Scientist (70633501)
|
Project Period (FY) |
2019-04-01 – 2023-03-31
|
Keywords | Memory access tracing / Runtime approximation / Distributed learning / Neural network training / I/O of deep learning |
Outline of Annual Research Achievements |
Results have been achieved in the project's two parallel efforts. First, we found that system-software-level heterogeneous memory management driven by machine learning, in particular by methods that learn without labeled data such as reinforcement learning, requires rapid estimation of execution runtime as a function of the data layout across memory devices in order to explore different data placement strategies; architecture-level simulators are impractical for this purpose. We therefore proposed a differential tracing-based approach that uses memory access traces obtained by high-frequency sampling (e.g., Intel's PEBS) on real hardware equipped with different memory devices. We developed a runtime estimator based on such traces that delivers execution time estimates orders of magnitude faster than full-system simulators. On a number of HPC mini-applications, the estimator predicted runtime with an average error of 4.4% relative to measurements on real hardware. Second, for the deep learning data shuffling subtopic, we investigated the viability of partitioning the dataset among DL workers and performing only a partial distributed exchange of samples in each training epoch. Through extensive experiments on up to 2,048 GPUs of ABCI and 4,096 compute nodes of Fugaku, we demonstrated that in practice the validation accuracy of global shuffling can be maintained when the partial distributed exchange is carefully tuned. We provide a PyTorch implementation that lets users control the proposed data exchange scheme.
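To make the estimator's principle concrete, the following is a minimal Python sketch of a trace-based runtime estimator. The per-device latencies, the page-granularity placement map, and the names (estimate_runtime, DEVICE_LATENCY_NS) are illustrative assumptions, not the project's actual implementation.

```python
from typing import Dict, Iterable

PAGE_SHIFT = 12  # 4 KiB pages

# Average access latency per memory device in nanoseconds (illustrative
# values, assumed to be measured offline on the target machine).
DEVICE_LATENCY_NS = {"dram": 90.0, "pmem": 350.0}

def estimate_runtime(sampled_addresses: Iterable[int],
                     placement: Dict[int, str],
                     base_time_s: float,
                     sample_period: int) -> float:
    """Estimate execution time for a given page-to-device placement.

    Each sampled address stands for `sample_period` real accesses;
    `base_time_s` is the memory-independent (compute) portion of the
    runtime derived from the trace on the reference configuration.
    """
    mem_ns = 0.0
    for addr in sampled_addresses:
        device = placement.get(addr >> PAGE_SHIFT, "dram")  # default: DRAM
        mem_ns += sample_period * DEVICE_LATENCY_NS[device]
    return base_time_s + mem_ns * 1e-9

# Example: three sampled accesses, with one page placed in slower memory.
trace = [0x1000, 0x1008, 0x2000]
print(estimate_runtime(trace, {0x2: "pmem"}, base_time_s=1.0, sample_period=1000))
```

Because the estimate is a cheap pure function of the placement, an RL agent can query it millions of times where a cycle-accurate simulator would allow only a handful of evaluations.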
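The partial distributed exchange can likewise be illustrated with a simplified sketch. The random worker pairing and the `exchange_ratio` parameter below are assumptions for exposition and do not reflect the actual PyTorch implementation.

```python
import random
from typing import List

def partial_exchange(partitions: List[List[int]], exchange_ratio: float,
                     rng: random.Random) -> None:
    """Swap a fraction of sample indices between random pairs of workers,
    in place, instead of performing a full global shuffle."""
    workers = list(range(len(partitions)))
    rng.shuffle(workers)
    # Pair workers up; each pair swaps `exchange_ratio` of its samples.
    for a, b in zip(workers[::2], workers[1::2]):
        k = int(len(partitions[a]) * exchange_ratio)
        rng.shuffle(partitions[a])
        rng.shuffle(partitions[b])
        partitions[a][:k], partitions[b][:k] = partitions[b][:k], partitions[a][:k]

# Example: 4 workers holding 8 samples each, exchanging 25% per epoch.
rng = random.Random(0)
parts = [list(range(w * 8, (w + 1) * 8)) for w in range(4)]
for epoch in range(3):
    partial_exchange(parts, exchange_ratio=0.25, rng=rng)
```

The exchange ratio is the knob that trades communication volume against how closely the sample distribution approximates a global shuffle.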
|
Current Status of Research Progress |
1: Research has progressed more than it was originally planned.
Reason
We have made significant progress on two fronts of the project, which was possible mainly thanks to successful collaborations with Argonne National Laboratory in the US, AIST in Japan, and Télécom SudParis in France. We foresee an additional two publications as the likely outcome of the overall effort.
|
Strategy for Future Research Activity |
For the two respective subtopics, we plan the following steps. With respect to the reinforcement learning-based memory management topic, we are integrating our differential tracing-based runtime estimator into the OpenAI "gym" environment framework, which we are coupling with the PFRL reinforcement learning framework developed by Preferred Networks in Japan (see the sketch below). On the deep learning data shuffling and I/O optimization topic, we are investigating the feasibility of importance sampling-based input sample shuffling and its integration into the distributed learning scheme. In particular, early experiments show that importance sampling-based dataset decay, i.e., actively discarding input samples that are less important, can lead to significant runtime improvements.
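As an illustration of the planned gym integration, the hypothetical sketch below wraps a runtime estimator as a gym environment that agents such as those in PFRL could train against. The action/observation encoding, the page grouping, and the `estimate_runtime` callback are assumptions; the real integration is work in progress.

```python
import numpy as np
import gym
from gym import spaces

class MemoryPlacementEnv(gym.Env):
    """Each step toggles one page group between DRAM (0) and slower
    memory (1); the reward is the negative runtime predicted by the
    trace-based estimator."""

    def __init__(self, num_groups: int, estimate_runtime):
        super().__init__()
        self.num_groups = num_groups
        self.estimate_runtime = estimate_runtime
        self.action_space = spaces.Discrete(num_groups)
        self.observation_space = spaces.MultiBinary(num_groups)
        self.placement = np.zeros(num_groups, dtype=np.int8)

    def reset(self):
        self.placement[:] = 0  # start with everything in DRAM
        return self.placement.copy()

    def step(self, action):
        self.placement[action] ^= 1  # move the group to the other device
        reward = -self.estimate_runtime(self.placement)
        done = False  # episode termination policy omitted in this sketch
        return self.placement.copy(), reward, done, {}
```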
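The dataset decay idea can be sketched as follows; the loss-based importance score and the per-epoch drop fraction are assumptions used only to illustrate the mechanism, not the project's final design.

```python
import numpy as np

def decay_dataset(indices: np.ndarray, losses: np.ndarray,
                  drop_fraction: float) -> np.ndarray:
    """Return the sample indices that survive one decay step.

    `losses` holds the latest per-sample training loss for each index;
    the lowest-loss (least important) `drop_fraction` of samples is
    actively discarded from subsequent epochs.
    """
    keep = max(1, int(len(indices) * (1.0 - drop_fraction)))
    order = np.argsort(losses)[::-1]  # highest loss first
    return indices[order[:keep]]

# Example: 1000 samples, dropping the 10% with the lowest loss.
idx = np.arange(1000)
loss = np.random.rand(1000)
idx = decay_dataset(idx, loss, drop_fraction=0.10)
```

Shrinking the working set in this way reduces both per-epoch compute and the I/O pressure of sample exchange, which is why it is a candidate for integration into the distributed shuffling scheme.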
|
Causes of Carryover |
Not applicable.
|
Research Products
(2 results)