Research Project/Area Number |
22K21280
|
Research Category |
Grant-in-Aid for Research Activity Start-up
|
Allocation Type | Multi-year Fund |
Review Section |
1001: Information science, computer engineering, and related fields
|
Research Institution | Nara Institute of Science and Technology |
Principal Investigator |
KAN YIRONG, Nara Institute of Science and Technology, Graduate School of Science and Technology, Assistant Professor (50963732)
|
Project Period (FY) |
2022-08-31 – 2024-03-31
|
Project Status |
Granted (FY2022)
|
Budget Amount *Note |
Total: 2,860 thousand yen (Direct Cost: 2,200 thousand yen, Indirect Cost: 660 thousand yen)
FY2023: 1,430 thousand yen (Direct Cost: 1,100 thousand yen, Indirect Cost: 330 thousand yen)
FY2022: 1,430 thousand yen (Direct Cost: 1,100 thousand yen, Indirect Cost: 330 thousand yen)
|
Keywords | Reconfigurable Computing / CGRA / Neuromorphic Systems / Spiking Neural Networks / Hybrid Driven |
Outline of Research at the Start |
Future intelligent systems should not only process information efficiently but also maintain continuous perception of the external environment. This research aims to develop a reconfigurable neuromorphic system with adaptive perception-computation integration. By rationally merging adaptive spike representation, hybrid event/clock-driven neuron circuits, and a fully parallel reconfigurable neural network architecture, we expect to achieve low-power, reconfigurable perception-computation integration for neuromorphic systems.
|
Outline of Annual Research Achievements |
This year, we developed and verified the following technologies for the hybrid-driven reconfigurable perception-computation platform: (1) Spike coding of electroencephalogram (EEG) signals and spiking neural network (SNN)-based processing. In several works, we successfully applied adaptive, stochastic, and frequency-based spike coding to EEG signals, and achieved competitive sleep-stage classification accuracy with SNNs; (2) A ternary weight quantization method for deep SNNs and its hardware implementation. In this work, we achieved energy-efficient inference hardware by quantizing SNN weights to {-1, 0, 1}. The vanishing-gradient problem during model training is avoided by designing cross-layer connections, and at the inference stage ternary-weight SNNs require only simple logic operations, reducing hardware overhead; (3) A training and construction mechanism for reconfigurable bisection neural network (BNN) topologies. We proposed a general construction method for BNNs and its training mechanism: by constructing a mask matrix with a bisection structure, a BNN model with a specific topology can be trained automatically.
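As an illustration of item (2), weight ternarization can be sketched as follows. This is a minimal sketch assuming a common threshold-based scheme; the `delta_scale` factor and the per-tensor threshold rule are assumptions for illustration, not necessarily the project's exact method.

```python
import numpy as np

def ternarize(w, delta_scale=0.7):
    """Quantize a weight tensor to {-1, 0, 1}.

    Threshold-based ternarization (a common scheme; the project's exact
    method may differ): weights whose magnitude falls below a per-tensor
    threshold become 0, the rest keep only their sign.
    """
    delta = delta_scale * np.mean(np.abs(w))  # assumed threshold rule
    return np.sign(w) * (np.abs(w) > delta)

w = np.array([0.8, -0.05, -0.9, 0.1])
print(ternarize(w))  # ternary values: 1, 0, -1, 0
```

During training, such schemes typically keep full-precision shadow weights and ternarize only in the forward pass, so gradients can still flow; the cross-layer connections mentioned above address the vanishing-gradient issue that deep quantized SNNs otherwise face.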
|
Current Status of Research Progress (Category) |
2: Progressing rather smoothly
Reason
Current research progress matches expectations. The main reasons are: (1) spike coding works well on time-series data such as EEG signals; (2) we already have a foundation in the efficient hardware implementation of SNNs; (3) we have completed the theoretical basis of reconfigurable neural networks. Three papers have been published in international journals, four in international conference proceedings, and two are currently under submission to international conferences.
|
Strategy for Future Research Activity |
First, we will integrate SNNs with the bisection topology to realize reconfigurable SNN hardware, and then replace the adders and multipliers in the original SNN hardware with look-up tables to realize low-power computation. Second, we will explore the integration of stochastic computing and BNNs to realize a computing architecture with temporal-spatial reconfigurability. Finally, we will apply the proposed platform to various online perception and computation applications.
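The multiplier-free idea behind combining ternary weights with look-up tables can be sketched as follows. This is a toy software illustration, not the planned hardware design; the table and function names are hypothetical.

```python
# With weights restricted to {-1, 0, 1}, each "multiplication" reduces to a
# three-entry look-up (negate, drop, or pass through), so a synaptic
# accumulation needs only additions and subtractions -- no multiplier.
TERNARY_LUT = {-1: lambda x: -x, 0: lambda x: 0, 1: lambda x: x}

def ternary_dot(weights, spikes):
    # spikes: binary (0/1) SNN activity; weights: ternary values
    return sum(TERNARY_LUT[w](s) for w, s in zip(weights, spikes))

print(ternary_dot([1, -1, 0, 1], [1, 1, 0, 1]))  # 1 - 1 + 0 + 1 = 1
```

In hardware, the same idea maps each weight to a small LUT that selects between the negated input, zero, and the input itself, which is what makes the low-power replacement of multipliers plausible.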
|