Project/Area Number | 22K21280 |
Research Category | Grant-in-Aid for Research Activity Start-up |
Allocation Type | Multi-year Fund |
Review Section | 1001: Information science, computer engineering, and related fields |
Research Institution | Nara Institute of Science and Technology |
Principal Investigator | KAN YIRONG, Nara Institute of Science and Technology, Graduate School of Science and Technology, Assistant Professor (50963732) |
Project Period (FY) | 2022-08-31 – 2025-03-31 |
Project Status | Granted (Fiscal Year 2023) |
Budget Amount |
¥2,860,000 (Direct Cost: ¥2,200,000, Indirect Cost: ¥660,000)
Fiscal Year 2023: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2022: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
|
Keywords | Reconfigurable Hardware / Stochastic Computing / Spiking Neural Network / Reconfigurable Computing / CGRA / Neuromorphic Systems / Spiking Neural Networks / Hybrid Driven |
Outline of Research at the Start |
Future intelligent systems should not only process information efficiently but also maintain continuous perception of the external environment. This research aims to develop a reconfigurable neuromorphic system with adaptive perception-computation integration. By rationally merging adaptive spike representation, hybrid event-clock-driven neuron circuits, and a fully parallel reconfigurable neural network architecture, we expect to achieve low-power, reconfigurable perception-computation integration for neuromorphic systems.
|
Outline of Annual Research Achievements |
This year, we developed and verified the following technologies: (1) we designed and implemented an ultra-compact computation unit with temporal-spatial reconfigurability by combining a novel bisection neural network topology with stochastic computing; (2) we proposed a non-deterministic training approach for memory-efficient stochastic computing neural networks (SCNNs), whose multiple-parallel training strategy greatly reduces the computational latency and memory overhead of SCNNs; (3) we developed a low-latency spiking neural network (SNN) with improved temporal dynamics, analyzing the temporal dynamic characteristics of SNN encoding to realize a high-accuracy SNN model with fewer time steps.
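For reference, the sketch below illustrates the stochastic-computing primitive underlying items (1) and (2): a value in [0, 1] is encoded as a random bitstream whose probability of '1' equals the value, so multiplication reduces to a single AND gate per bit. This is a minimal Python illustration only; the bitstream length, seed, and function names are assumptions made for the example and are not details of the implemented hardware.

# Minimal sketch of unipolar stochastic computing (SC): a value x in [0, 1] is
# encoded as a random bitstream with P(bit = 1) = x, and the product of two
# values is obtained with one AND gate per bit pair.
# Stream length and seed are illustrative, not project parameters.
import numpy as np

rng = np.random.default_rng(0)
N = 4096                              # bitstream length (accuracy grows with N)

def encode(x, n=N):
    """Encode x in [0, 1] as an n-bit stochastic bitstream."""
    return (rng.random(n) < x).astype(np.uint8)

def decode(stream):
    """Estimate the encoded value as the fraction of '1' bits."""
    return stream.mean()

a, b = 0.75, 0.40
prod_stream = encode(a) & encode(b)   # SC multiplication: bitwise AND
print(decode(prod_stream))            # ~0.30, i.e. a * b with stochastic error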
|
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
Current research progress matches expectations. The main reasons are: (1) we implemented a computing platform with temporal-spatial reconfigurability by combining stochastic computing with the bisection neural network; (2) the computational latency and memory overhead of stochastic computing neural networks were reduced through algorithm optimization; (3) a low-latency SNN model was developed via improved temporal dynamics. This year, three papers were published at international conferences, and one paper is under submission to an international conference.
|
Strategy for Future Research Activity |
We plan to combine the SNN with the bisection neural network topology to realize fully parallel, reconfigurable SNN hardware. By introducing structured sparse synaptic connections into the SNN, the neuron computation and weight storage costs can be significantly reduced. Benefiting from the hardware-friendly symmetric SNN topology, the accelerator can be flexibly configured into multiple classifiers without hardware redundancy to support various tasks. In future work, we will explore how to achieve the highest classification performance at minimal hardware cost.
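As an illustration of the planned computation, the following minimal Python sketch simulates a discrete-time leaky integrate-and-fire (LIF) layer with a block-structured sparse weight mask, the style of SNN workload such a reconfigurable accelerator would execute. The layer sizes, time constants, input statistics, and sparsity pattern are illustrative assumptions, not the project's actual design.

# Minimal sketch of a discrete-time LIF layer with a structured-sparse weight
# mask; all sizes and constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, T = 64, 16, 8          # inputs, neurons, time steps
tau, v_th = 0.9, 1.0                # leak factor and firing threshold

# Structured sparsity: each output neuron connects to one contiguous input block.
mask = np.zeros((n_out, n_in))
block = n_in // n_out
for i in range(n_out):
    mask[i, i * block:(i + 1) * block] = 1.0
w = rng.normal(0, 0.5, (n_out, n_in)) * mask

v = np.zeros(n_out)                 # membrane potentials
spikes_in = (rng.random((T, n_in)) < 0.2).astype(float)  # Poisson-like input spikes
out_counts = np.zeros(n_out)

for t in range(T):                  # few time steps -> low latency
    v = tau * v + w @ spikes_in[t]  # leaky integration of weighted input spikes
    fired = v >= v_th
    out_counts += fired
    v[fired] = 0.0                  # reset membrane potential after firing

print(out_counts)                   # output spike counts usable as class scores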
|