2023 Fiscal Year Final Research Report
Acceleration framework for deep learning training through cooperation between algorithms and computer architectures
Project/Area Number | 21K17768
Research Category | Grant-in-Aid for Early-Career Scientists
Allocation Type | Multi-year Fund
Review Section | Basic Section 61010: Perceptual information processing-related
Research Institution | Tokyo University of Science
Principal Investigator | Maeda Yoshihiro, Tokyo University of Science, Faculty of Engineering, Department of Electrical Engineering, Lecturer (80843375)
Project Period (FY) | 2021-04-01 – 2024-03-31
Keywords | Deep learning / Computer architecture / High-efficiency computing / Acceleration
Outline of Final Research Achievements | In this research, we aimed to accelerate the training of deep neural networks (DNNs), which are used in a wide range of fields. We focused on DNN optimization techniques such as pruning and quantization, which simplify DNN models, and explored, from a computer-architecture perspective, how these techniques can be applied during training. We found that pruning- and quantization-based algorithms can accelerate DNN training by exploiting the underlying computer architecture.
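The outline above names pruning and quantization as the levers for faster training. As a minimal illustrative sketch only (not the project's actual implementation), the following NumPy code shows the two basic operations: magnitude-based pruning, which zeroes the smallest weights so sparse kernels can skip them, and symmetric int8 quantization, which maps weights to a low-precision grid so hardware can use narrower arithmetic. All function names and parameters here are hypothetical.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Return a boolean mask that zeroes out the smallest-magnitude
    fraction (`sparsity`) of the weights."""
    k = int(w.size * sparsity)
    if k == 0:
        return np.ones_like(w, dtype=bool)
    # k-th smallest absolute value becomes the pruning threshold
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.abs(w) > thresh

def quantize_int8(w):
    """Symmetric uniform quantization to int8 levels, then dequantize
    back to float (quantization error <= scale / 2 per weight)."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127)
    return q * scale

# Toy weight matrix standing in for one DNN layer
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))

mask = magnitude_prune(w, sparsity=0.5)
w_sparse = w * mask            # pruned weights stay zero during later updates
w_q = quantize_int8(w_sparse)  # low-precision representation for compute
```

In training-time use, such a mask would be reapplied after each gradient update and the quantized weights used in the forward/backward passes, so that the simplified model maps onto the sparse and low-precision units of the target architecture.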
Free Research Field | Image processing
Academic Significance and Societal Importance of the Research Achievements | This research investigates accelerating the training of deep neural networks (DNNs). In image processing, DNNs have pushed accuracy even higher on a variety of computer vision tasks such as object recognition and super-resolution. Their impact is not limited to image processing: DNNs are applied across many research areas and are in commercial use in industry. By making faster training possible, this research further broadens the range of their applications.