Research Project/Area Number | 22KF0142
Project Number (Grants-in-Aid) | 21F21768 (2021-2022)
Research Category | Grant-in-Aid for JSPS Fellows
Allocation Type | Multi-year Fund (2023) / Single-year Grants (2021-2022)
Section | Foreign
Review Section | Basic Section 61030: Intelligent informatics-related
Research Institution | Ochanomizu University
Principal Investigator | Nathanael Aubert-Kato, Ochanomizu University, Faculty of Core Research, Lecturer (10749659)
Co-Investigator | DA ROLD Federico, Ochanomizu University, Faculty of Core Research, Foreign JSPS Fellow
Project Period (FY) | 2023-03-08 – 2025-03-31
Project Status | Granted (FY2023)
Budget Amount *Note |
Total: ¥2,200,000 (Direct Cost: ¥2,200,000)
FY2024: ¥400,000 (Direct Cost: ¥400,000)
FY2023: ¥449,000 (Direct Cost: ¥449,000)
FY2022: ¥1,100,000 (Direct Cost: ¥1,100,000)
FY2021: ¥700,000 (Direct Cost: ¥700,000)
|
Keywords | Synaptic Pruning / Quality-Diversity / Information theory / Network metrics / Evolutionary Strategy |
Outline of Research at the Start |
This project aims to develop a Quality-Diversity approach that dynamically prunes deep neural networks during exploration. The expected outcome is an algorithmic framework for reducing the number of parameters in deep learning models. We thus expect lower energy consumption and computational requirements, with applications to resource-constrained embedded systems.
|
Outline of Annual Research Achievements |
During this year, we used an exploratory approach based on Quality-Diversity (QD) algorithms to perform a systematic analysis of the effects of pruning on neural models during evolution. We relied on mathematical tools from network science to capture regularities in network structure, and on information-theoretic analysis to describe the learning process. We focused on reinforcement learning problems (bipedal walker, lunar lander) and compared the results to a purely performance-based optimization technique. We also evaluated setups in which a pruning operator evolves alongside the models. The analysis revealed the emergence of patterns and regimes in both the mutual information and the estimated network measures. This exploratory work provides a solid foundation for guiding and facilitating the development of pruning algorithms. A short paper describing our results has been accepted at the GECCO international conference, and a second one was submitted to the ALIFE conference.
|
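The setup described above — QD exploration of pruned networks, binned by a structural descriptor — can be illustrated with a minimal MAP-Elites-style sketch. This is not the project's implementation: the toy fitness stands in for the RL episode returns (bipedal walker, lunar lander), and the choice of sparsity as the behavior descriptor, the magnitude-based pruning operator, and all function names are illustrative assumptions.

```python
import random

def prune(weights, rate):
    # Pruning operator (assumed magnitude-based): zero out the
    # fraction `rate` of smallest-magnitude weights.
    k = int(len(weights) * rate)
    idx = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k])
    return [0.0 if i in idx else w for i, w in enumerate(weights)]

def sparsity(weights):
    # Fraction of pruned (zero) weights; used here as the QD descriptor.
    return sum(1 for w in weights if w == 0.0) / len(weights)

def fitness(weights):
    # Toy stand-in for an RL episode return; always <= 0, maximized at w = 0.5.
    return -sum((w - 0.5) ** 2 for w in weights if w != 0.0)

def map_elites(n_iters=2000, n_weights=32, n_bins=10, seed=0):
    rng = random.Random(seed)
    archive = {}  # descriptor bin (sparsity level) -> (fitness, weights)
    for _ in range(n_iters):
        if archive and rng.random() < 0.9:
            # Select an elite and mutate it with Gaussian noise.
            _, parent = archive[rng.choice(list(archive))]
            child = [w + rng.gauss(0.0, 0.1) for w in parent]
        else:
            child = [rng.uniform(-1.0, 1.0) for _ in range(n_weights)]
        # The pruning pressure itself varies across offspring,
        # mimicking a pruning operator that evolves with the models.
        child = prune(child, rng.uniform(0.0, 0.5))
        b = min(int(sparsity(child) * n_bins), n_bins - 1)
        f = fitness(child)
        if b not in archive or f > archive[b][0]:
            archive[b] = (f, child)  # keep the best model per sparsity level
    return archive
```

Running `map_elites()` yields an archive of elites spread over sparsity levels, which is the kind of population one can then analyze with network and information-theoretic measures.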
Current Status of Research Progress |
2: Research is progressing generally smoothly.
Reason
Research progressed according to plan. We implemented the exploration algorithm, applied it to standard reinforcement learning benchmarks (bipedal walker, lunar lander), and analyzed the results. Among the metrics considered, we identified several (Louvain modularity, assortativity) that correlate with an accelerated learning rate and are strong candidates for direct use in pruning algorithms, as we had hoped. The results have been summarized in research papers submitted to top international conferences.
|
Strategy for Future Research Activity |
Future work will test the approach in supervised learning (LeCun et al., 2015), using computer vision tasks solved with widely used, well-tested DNN models trained with state-of-the-art learning methods (Khan et al., 2022). This will facilitate the assessment of our method, as it enables a controlled comparison with other pruning techniques. Avoiding reinforcement learning scenarios should also ease the evaluation of the pruning algorithm during its development, since this class of problems is notoriously unstable and difficult to solve. We also plan to use a larger set of metrics, with the only constraint being that they apply to unweighted directed acyclic graphs. Another approach is to treat each layer pairing as a bipartite graph, expanding the set of measures that are potential candidates for fitness functions in the pruning phase.
|