Project/Area Number |
22K21280
|
Research Category |
Grant-in-Aid for Research Activity Start-up
|
Allocation Type | Multi-year Fund |
Review Section |
1001: Information science, computer engineering, and related fields
|
Research Institution | Nara Institute of Science and Technology |
Principal Investigator |
KAN YIRONG, Nara Institute of Science and Technology, Graduate School of Science and Technology, Assistant Professor (50963732)
|
Project Period (FY) |
2022-08-31 – 2025-03-31
|
Project Status |
Granted (FY2023)
|
Budget Amount *Note |
¥2,860,000 (Direct Cost: ¥2,200,000; Indirect Cost: ¥660,000)
FY2023: ¥1,430,000 (Direct Cost: ¥1,100,000; Indirect Cost: ¥330,000)
FY2022: ¥1,430,000 (Direct Cost: ¥1,100,000; Indirect Cost: ¥330,000)
|
Keywords | Reconfigurable Hardware / Stochastic Computing / Spiking Neural Network / Reconfigurable Computing / CGRA / Neuromorphic Systems / Spiking Neural Networks / Hybrid Driven |
Outline of Research at the Start |
Future intelligent systems should not only process information efficiently but also maintain continuous perception of the external environment. This research aims to develop a reconfigurable neuromorphic system with adaptive perception-computation integration. By rationally merging adaptive spike representation, hybrid event-clock-driven neuron circuits, and a fully parallel reconfigurable neural network architecture, low-power reconfigurable perception-computation integration for neuromorphic systems is expected to be achieved.
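As background for the spiking neuron circuits mentioned above, the behavior of a clock-driven (discrete-time) leaky integrate-and-fire neuron can be sketched as follows. This is a generic textbook model, not the project's circuit; the function name and parameter values are illustrative assumptions.

```python
def lif_simulate(input_current, threshold=1.0, leak=0.9):
    """Simulate a leaky integrate-and-fire (LIF) neuron over discrete time steps.

    Each step, the membrane potential decays by the leak factor, accumulates
    the input, and emits a spike (then resets) when it crosses the threshold.
    Parameters are illustrative, not taken from the project.
    """
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t      # leaky integration of input current
        if v >= threshold:      # threshold crossing -> output spike event
            spikes.append(1)
            v = 0.0             # hard reset after spiking
        else:
            spikes.append(0)
    return spikes
```

A constant sub-threshold input then produces a regular spike train, e.g. `lif_simulate([0.6] * 5)` spikes on every second step.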
|
Outline of Annual Research Achievements |
This year, we developed and verified the following technologies: (1) we designed and implemented an ultra-compact computation unit with temporal-spatial reconfigurability by combining a novel bisection neural network topology with stochastic computing; (2) we proposed a non-deterministic training approach for memory-efficient stochastic computing neural networks (SCNNs), in which a multiple-parallel training strategy greatly reduces the computational latency and memory overhead of the SCNN; (3) we developed a low-latency spiking neural network (SNN) with improved temporal dynamics: by analyzing the temporal dynamic characteristics of SNN encoding, we realized a high-accuracy SNN model using fewer time steps.
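The stochastic computing used in item (1) rests on a standard identity: if two independent unipolar bitstreams carry values as the probability of a 1, a single AND gate computes their product. The sketch below illustrates that principle only; the function names and the bitstream length are assumptions, not the project's implementation.

```python
import random

def to_bitstream(p, length, rng):
    """Encode a value p in [0, 1] as a unipolar stochastic bitstream:
    each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(a, b, length=4096, seed=0):
    """Approximate a * b with a single AND gate over two independent
    bitstreams (unipolar stochastic computing). Illustrative sketch."""
    rng = random.Random(seed)
    sa = to_bitstream(a, length, rng)
    sb = to_bitstream(b, length, rng)
    anded = [x & y for x, y in zip(sa, sb)]  # one AND gate per bit pair
    return sum(anded) / length               # decode: fraction of 1s
```

The accuracy/latency trade-off of stochastic computing is visible here: the estimate of `a * b` sharpens only as the bitstream length grows, which is why reducing SCNN latency, as in item (2), matters.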
|
Current Status of Research Progress |
2: Research is progressing generally smoothly
Reason
Current research progress matches expectations. The main reasons are: (1) we implemented a computing platform with temporal-spatial reconfigurability by combining stochastic computing with the bisection neural network; (2) the computational latency and memory overhead of stochastic computing neural networks were reduced through algorithmic optimization; (3) a low-latency SNN model was developed via improved temporal dynamics. This year, three papers were published at international conferences, and one paper is under submission to an international conference.
|
Strategy for Future Research Activity |
We plan to combine the SNN with the bisection neural network topology to realize fully parallel, reconfigurable SNN hardware. By introducing structured sparse synaptic connections into the SNN, the neuron computation and weight storage costs can be significantly reduced. Benefiting from the hardware-friendly symmetric SNN topology, the accelerator can be flexibly configured into multiple classifiers without hardware redundancy to support various tasks. In future work, we will explore how to achieve the highest classification accuracy at minimal hardware cost.
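One generic way to realize structured sparse synaptic connections is a fixed block mask over the weight matrix, so each output neuron reads only a small, regularly placed block of inputs. The sketch below is a hypothetical illustration of that idea under assumed block sizes, not the project's actual connection scheme.

```python
import numpy as np

def structured_sparse_mask(n_out, n_in, block=4):
    """Build a binary connectivity mask in which each output neuron
    connects to exactly one contiguous block of `block` inputs.
    Weight storage shrinks by roughly a factor of n_in / block.
    Illustrative sketch; block placement is an assumption."""
    mask = np.zeros((n_out, n_in), dtype=np.int8)
    for o in range(n_out):
        start = (o * block) % n_in   # stride blocks across the inputs
        mask[o, start:start + block] = 1
    return mask
```

Because the nonzero positions are regular rather than random, the hardware only needs to store the block offsets and the `block` weights per neuron, which is what makes structured (as opposed to unstructured) sparsity hardware-friendly.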
|