Research Project Number | 23K24856
Grant Number for Subsidy Period | 22H03600 (2022-2023)
Research Category | Grant-in-Aid for Scientific Research (B)
Allocation Type | Multi-year Fund (2024) / Single-year Grants (2022-2023)
Application Category | General
Review Section | Basic Section 60090: High performance computing-related
Research Institution | RIKEN
Principal Investigator | WAHIB MOHAMED, RIKEN Center for Computational Science, Team Leader (00650037)
Co-Investigator | DROZD ALEKSANDR, RIKEN Center for Computational Science, Researcher (90740126)
Project Period (FY) | 2022-04-01 – 2026-03-31
Project Status | Granted (FY2024)
Budget Amount *Note | Total: 17,290 thousand yen (Direct Cost: 13,300 thousand yen, Indirect Cost: 3,990 thousand yen)
FY2025: 4,030 thousand yen (Direct Cost: 3,100 thousand yen, Indirect Cost: 930 thousand yen)
FY2024: 4,290 thousand yen (Direct Cost: 3,300 thousand yen, Indirect Cost: 990 thousand yen)
FY2023: 4,290 thousand yen (Direct Cost: 3,300 thousand yen, Indirect Cost: 990 thousand yen)
FY2022: 4,680 thousand yen (Direct Cost: 3,600 thousand yen, Indirect Cost: 1,080 thousand yen)
Keywords | Code Generation / Neural Networks / Intelligent Programming / HPC / Compilers / Machine Learning / GPUs / Numerical Methods
Outline of Research at the Start |
We will develop a machine learning framework for automated code generation and optimization to replace manual porting, using diverse datasets to train models for various hardware architectures. The framework will be validated, refined, and deployed within a year to ensure wide accessibility and usability.
|
Outline of Annual Research Achievements |
In this fiscal year, we developed an approach that automatically generates neural networks. Neural architecture search (NAS) is an effective approach for automating the design of deep neural networks, and evolutionary computation (EC) is commonly used in NAS due to its global optimization capability. However, the evaluation phase of architecture candidates in EC-based NAS is compute-intensive, which limits its application to many real-world problems. To overcome this challenge, we proposed a novel progressive evaluation strategy for the evaluation phase of convolutional neural network architecture search, in which the number of training epochs of network individuals is progressively increased. Our proposed algorithm reduces the computational cost of the evaluation phase and promotes population diversity and fairness by preserving promising networks based on their distribution. We evaluated the proposed progressive evaluation and sub-population preservation NAS (PEP-NAS) algorithm on the CIFAR10, CIFAR100, and ImageNet benchmark datasets, and compared it with 36 state-of-the-art algorithms, including manually designed networks, reinforcement learning (RL) algorithms, gradient-based algorithms, and other EC-based algorithms. The experimental results demonstrate that PEP-NAS effectively identifies networks with competitive accuracy while markedly improving the efficiency of the search process.
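As a rough illustration of the progressive evaluation idea, the Python sketch below shows an evolutionary loop in which candidates are first trained for only a few epochs and only a preserved sub-population receives additional epochs in later rounds. All names, the epoch schedule, and the fitness proxy are hypothetical stand-ins, and the simple top-k preservation used here stands in for the distribution-based sub-population preservation described above; this is a minimal sketch, not the published PEP-NAS implementation.

```python
import random

# Illustrative settings; not the published PEP-NAS configuration.
EPOCH_SCHEDULE = [2, 5, 10]   # extra training epochs granted per round
POP_SIZE = 16
KEEP_FRACTION = 0.5           # fraction of the population preserved each round


def sample_architecture():
    """Randomly sample a (dummy) architecture encoding."""
    return {"depth": random.randint(4, 12), "width": random.choice([16, 32, 64])}


def train_and_validate(arch, trained_epochs, extra_epochs):
    """Stand-in for real training: returns a noisy proxy accuracy that
    improves with the total number of epochs, so that early rounds give
    a cheap but rough ranking signal."""
    total = trained_epochs + extra_epochs
    signal = 0.5 + 0.03 * arch["depth"] * (arch["width"] / 64.0)
    noise = random.gauss(0, 0.05 / (1 + total))  # noise shrinks with training
    return min(1.0, signal * (1 - 0.5 ** (total / 5.0)) + noise), total


def progressive_evolution():
    population = [{"arch": sample_architecture(), "epochs": 0, "acc": 0.0}
                  for _ in range(POP_SIZE)]
    for round_idx, extra in enumerate(EPOCH_SCHEDULE):
        # Give every surviving individual a small additional training budget.
        for ind in population:
            ind["acc"], ind["epochs"] = train_and_validate(
                ind["arch"], ind["epochs"], extra)
        # Keep the most promising sub-population and discard the rest early;
        # this early discarding is where the evaluation cost is saved.
        population.sort(key=lambda ind: ind["acc"], reverse=True)
        keep = max(2, int(len(population) * KEEP_FRACTION))
        population = population[:keep]
        # Refill with fresh samples (except in the final round) so that
        # the population stays diverse.
        if round_idx < len(EPOCH_SCHEDULE) - 1:
            population += [{"arch": sample_architecture(), "epochs": 0, "acc": 0.0}
                           for _ in range(POP_SIZE - keep)]
    return max(population, key=lambda ind: ind["acc"])


if __name__ == "__main__":
    best = progressive_evolution()
    print("best architecture:", best["arch"], "proxy accuracy: %.3f" % best["acc"])
```

The saving comes from spending the full training budget only on individuals that survive successive rounds, while most candidates are discarded after a handful of epochs.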
|
Current Status of Research Progress (Category) |
2: Progressing generally as planned
Reason
The project is progressing as expected, and we were able to publish several papers.
|
Strategy for Future Research Activity |
Our plan for the next fiscal year is to extend our approach to auto-generating neural networks by proposing a progressive neural predictor that uses score-based sampling to improve the performance of the surrogate model with limited training data. Unlike existing algorithms that rely on a one-time initial sample selection, our method uses an online strategy to progressively select new training samples for the surrogate model based on information gathered during the previous search process. During the iterative process, the sampled scores are dynamically adjusted according to the prediction rankings in each round to keep track of good architectures, which gradually optimizes the surrogate model. In this way, the processes of training the predictor and searching for architectures are jointly combined to improve the efficiency of sample utilization. In addition, the surrogate model, which receives different degrees of training over time, is assigned a prediction confidence equal to its accuracy at the current stage.
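A minimal sketch of how such a score-based, progressively trained predictor could be organized, under explicitly stated assumptions: the expensive evaluation is replaced by a synthetic function, the surrogate is a simple 1-nearest-neighbour regressor, and the confidence and score-update rules are illustrative placeholders for the formulation described above, not the project's actual design.

```python
import random

# Illustrative budgets; a real system would tune these.
ROUNDS = 5
BATCH = 4            # architectures truly evaluated per round
CANDIDATES = 200


def encode(arch):
    """Feature vector for the surrogate (toy: two numeric fields)."""
    return (arch["depth"], arch["width"])


def true_accuracy(arch):
    """Stand-in for an expensive full training run."""
    return 0.6 + 0.02 * arch["depth"] + 0.001 * arch["width"] + random.gauss(0, 0.01)


def surrogate_predict(evaluated, arch):
    """1-NN surrogate: predict the accuracy of the nearest evaluated arch."""
    fx = encode(arch)
    nearest = min(evaluated,
                  key=lambda t: sum((a - b) ** 2 for a, b in zip(encode(t[0]), fx)))
    return nearest[1]


def progressive_predictor_search():
    pool = [{"depth": random.randint(4, 12), "width": random.choice([16, 32, 64])}
            for _ in range(CANDIDATES)]
    scores = {i: 0.0 for i in range(len(pool))}   # dynamic per-candidate score
    evaluated = []                                # (arch, true accuracy) pairs
    # Bootstrap the surrogate with a few random evaluations.
    for i in random.sample(range(len(pool)), BATCH):
        evaluated.append((pool[i], true_accuracy(pool[i])))
        del scores[i]
    for _ in range(ROUNDS):
        preds = {i: surrogate_predict(evaluated, pool[i]) for i in scores}
        # Confidence grows with the training data available at this stage;
        # a simple proxy for "confidence equal to the accuracy of the
        # current stage" in the plan above.
        confidence = len(evaluated) / (len(evaluated) + 10.0)
        ranking = sorted(scores, key=lambda i: preds[i], reverse=True)
        # Score update: highly ranked candidates accumulate score each round,
        # weighted by the current confidence, to keep track of good designs.
        for rank, i in enumerate(ranking):
            scores[i] += confidence * (len(ranking) - rank) / len(ranking)
        # Truly evaluate the top-scored candidates and fold them into the
        # surrogate's training set, jointly advancing predictor and search.
        chosen = sorted(scores, key=scores.get, reverse=True)[:BATCH]
        for i in chosen:
            evaluated.append((pool[i], true_accuracy(pool[i])))
            del scores[i]   # remove from the candidate pool once evaluated
    return max(evaluated, key=lambda t: t[1])


if __name__ == "__main__":
    arch, acc = progressive_predictor_search()
    print("best found:", arch, "accuracy: %.3f" % acc)
```

The design point this illustrates is that the expensive evaluations are spent only on candidates the surrogate already ranks highly, while each new evaluation immediately improves the surrogate for the next round.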
|