2023 Fiscal Year Research-status Report
Event-Clock Hybrid Driven Reconfigurable Perception-Computation Technology
Project/Area Number | 22K21280 |
Research Institution | Nara Institute of Science and Technology |
Principal Investigator | KAN YIRONG, Nara Institute of Science and Technology, Graduate School of Science and Technology, Assistant Professor (50963732) |
Project Period (FY) | 2022-08-31 – 2025-03-31 |
Keywords | Reconfigurable Hardware / Stochastic Computing / Spiking Neural Network |
Outline of Annual Research Achievements |
This year, we developed and verified the following technologies: (1) designed and implemented an ultra-compact computation unit with temporal-spatial reconfigurability by combining a novel bisection neural network topology with stochastic computing; (2) proposed a non-deterministic training approach for memory-efficient stochastic computing neural networks (SCNNs), in which a multiple-parallel training strategy greatly reduces the computational latency and memory overhead of SCNNs; (3) developed a low-latency spiking neural network (SNN) with improved temporal dynamics, in which analysis of the temporal dynamics of SNN encoding allowed us to realize a high-accuracy SNN model using fewer time steps.
|
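The core idea behind the stochastic-computing units above can be illustrated with a minimal sketch: a value in [0, 1] is encoded as a random bitstream, and multiplication reduces to a single AND gate per bit pair. The function names and stream length here are illustrative assumptions, not the project's actual implementation.

```python
import random

def to_stream(p, length, rng):
    # Encode a value p in [0, 1] as a random bitstream:
    # each bit is 1 with probability p.
    return [1 if rng.random() < p else 0 for _ in range(length)]

def from_stream(bits):
    # Decode: the represented value is the fraction of 1s.
    return sum(bits) / len(bits)

def sc_multiply(a_bits, b_bits):
    # Stochastic multiplication is a bitwise AND: for independent
    # streams, P(a AND b) = P(a) * P(b).
    return [x & y for x, y in zip(a_bits, b_bits)]

rng = random.Random(0)
n = 10000
prod = from_stream(sc_multiply(to_stream(0.8, n, rng),
                               to_stream(0.5, n, rng)))
# prod approximates 0.8 * 0.5 = 0.4
```

The hardware appeal is that a multiplier shrinks to one logic gate; the cost is that accuracy scales with stream length, which is why training-side optimizations that cut latency and memory matter for SCNNs.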
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
Current research progress matches expectations. The main reasons are: (1) we implemented a computing platform with temporal-spatial reconfigurability by combining stochastic computing with a bisection neural network; (2) the computational latency and memory overhead of stochastic computing neural networks were reduced through algorithm optimization; (3) a low-latency SNN model was developed via improved temporal dynamics. This year, three papers were published at international conferences, and one paper is currently under submission to an international conference.
|
Strategy for Future Research Activity |
We plan to combine the SNN with the bisection neural network topology to realize fully parallel and reconfigurable SNN hardware. By introducing structured sparse synaptic connections into the SNN, the neuron computation and weight storage costs can be significantly reduced. Benefiting from the hardware-friendly symmetric SNN topology, the accelerator can be flexibly configured as multiple classifiers without hardware redundancy to support various tasks. In future work, we will explore how to achieve the highest classification performance at minimal hardware cost.
|
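The low-latency trade-off mentioned above can be sketched with a toy rate-coded SNN layer: inputs are Bernoulli-encoded into spikes over T time steps, and integrate-and-fire neurons with a hard reset emit spike counts. Everything here (the encoding, the reset rule, the parameter names) is an illustrative assumption, not the project's actual model.

```python
import numpy as np

def lif_forward(x, w, T=4, v_th=1.0, seed=0):
    """Toy rate-coded SNN inference over T time steps with
    integrate-and-fire neurons and hard reset."""
    rng = np.random.default_rng(seed)
    v = np.zeros(w.shape[0])          # membrane potentials
    counts = np.zeros(w.shape[0])     # output spike counts
    for _ in range(T):
        # Bernoulli rate encoding: input intensity -> spike probability.
        spikes_in = (rng.random(x.shape) < x).astype(float)
        v += w @ spikes_in            # integrate weighted input spikes
        fired = v >= v_th             # threshold crossing
        counts += fired
        v[fired] = 0.0                # hard reset after firing
    return counts
```

Fewer time steps T mean lower inference latency but a coarser rate code; improving the temporal dynamics of the encoding is what allows high accuracy to survive at small T.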
Causes of Carryover |
We will purchase a multifunctional mobile device and connect it to an existing FPGA board to demonstrate image-processing functions based on our technologies. In addition, the remaining funds will cover international conference registration fees and travel expenses during the next fiscal year.
|