2023 Fiscal Year Final Research Report
An SRAM Computing in Memory to Exploit Energy Efficient Dimensional Separable Compact Machine Learning Model
Project/Area Number | 21K11818 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Multi-year Fund |
Section | General |
Review Section | Basic Section 60040: Computer system-related |
Research Institution | Fukuoka Institute of Technology |
Principal Investigator | |
Project Period (FY) | 2021-04-01 – 2024-03-31 |
Keywords | Low-power machine learning engine for AI everywhere / Computing in Memory |
Outline of Final Research Achievements | This research aims to eliminate unnecessary computation in CNN models. Dimensionality reduction in the channel direction is 1) the most efficient way to reduce parameters without losing accuracy and 2) also advantageous for hardware implementation. Our results showed that channels activated with low frequency are relatively easy to prune. Using the SE-ResNet attention algorithm, we demonstrated experimentally that binarizing the step that selects which feature maps to attend to speeds up the learning curve (a minimal sketch of this binarized gating follows after this record). We also clarified that the average channel reduction rate is approximately 50% across the three blocks of ResNet14, our own compact ResNet configuration. |
Free Research Field | Computing in Memory (CIM) / Low-power machine learning engine for AI everywhere |
Academic Significance and Societal Importance of the Research Achievements | Computing in Memory (CIM), which plays the dual role of memory and computation for machine learning, is attracting attention as a way to reduce power consumption. Current CIM structures, however, cannot exploit sparse regions either for power-reduction control or, conversely, for multi-level quantization that compensates accuracy. This research develops the CIM that architecture-level dimensional separation requires, and links the resulting sparse regions to actual operation halting and power gating. Sparse regions that do not contribute to power saving are adaptively reused for multi-level representation to compensate for the accuracy loss caused by binarization, thereby resolving the trade-off between accuracy and power reduction (a conceptual sketch of this mapping follows below). The result enables the "low-power machine learning engine for AI everywhere" that the times demand. |
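
The binarized channel-attention step mentioned in the outline can be illustrated with a minimal sketch. It assumes PyTorch, a standard SE (squeeze-and-excitation) gate, a reduction ratio of 16, a fixed 0.5 threshold, and a straight-through estimator for the binary decision; none of these implementation details are specified in the report, so this is an illustration of the general technique rather than the authors' code.

# Minimal sketch of an SE-style channel-attention block whose gating is
# binarized, so that channels with a gate of 0 can later be pruned.
# Assumptions not stated in the report: PyTorch, reduction ratio 16,
# threshold 0.5, straight-through estimator for the binary step.
import torch
import torch.nn as nn

class BinarySEGate(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, threshold: float = 0.5):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pool
        self.fc = nn.Sequential(                     # excitation: two small FC layers
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.fc(self.pool(x).view(b, c))         # soft attention in [0, 1]
        hard = (s > self.threshold).float()          # binary decision: attend or not
        # Straight-through estimator: the forward pass uses the hard mask,
        # the backward pass uses the gradient of the soft attention.
        gate = hard + s - s.detach()
        return x * gate.view(b, c, 1, 1)

if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    y = BinarySEGate(64)(x)
    print(y.shape)                                   # torch.Size([2, 64, 32, 32])

Channels whose gate stays at 0 over the training data are candidates for pruning, which is how a channel reduction rate such as the reported ~50% would be measured in this sketch.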
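The mapping from sparse channels to actual power savings described in the significance statement can also be sketched conceptually. The column-group granularity of 8, the multi-bit "upgrade" of surviving channels, and the planning function below are all assumptions made purely for illustration; the report does not describe a concrete mapping between sparse regions and the CIM array.

# Conceptual sketch only: how a binary per-channel gate might be mapped onto
# a CIM array whose columns can be power-gated in fixed-size groups.
# The group size, the multi-bit fallback, and the mapping itself are
# assumptions for illustration; the report does not specify these details.
from typing import List, Tuple

def plan_cim_power(gates: List[int], group_size: int = 8) -> Tuple[List[int], List[int]]:
    """Return (groups_to_power_gate, channels_to_upgrade_to_multibit).

    gates       -- 1 if the channel is attended (kept), 0 if it is sparse.
    group_size  -- number of CIM columns sharing one power switch (assumed).
    """
    gated_groups = []
    multibit_channels = []
    for g in range(0, len(gates), group_size):
        group = gates[g:g + group_size]
        if not any(group):
            # The whole group is sparse: it can actually be switched off.
            gated_groups.append(g // group_size)
        else:
            # Sparse columns here save no power; reuse them to give the
            # surviving channels multi-level (e.g. 2-bit) weights instead,
            # compensating the accuracy loss from binarization.
            spare = group.count(0)
            active = [g + i for i, v in enumerate(group) if v == 1]
            multibit_channels.extend(active[:spare])
    return gated_groups, multibit_channels

if __name__ == "__main__":
    gates = [0] * 8 + [1, 0, 1, 0, 1, 0, 1, 0]
    print(plan_cim_power(gates))  # ([0], [8, 10, 12, 14])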