2023 Fiscal Year Annual Research Report
Automated, Scalable, and Machine Learning-Driven Approach for Generating and Optimizing Scientific Application Codes
Project/Area Number | 22H03600 |
Allocation Type | Single-year Grants |
Research Institution | Institute of Physical and Chemical Research |
Principal Investigator | WAHIB MOHAMED, RIKEN Center for Computational Science, Team Leader (00650037) |
Co-Investigator (Kenkyū-buntansha) | DROZD ALEKSANDR, RIKEN Center for Computational Science, Researcher (90740126) |
Project Period (FY) | 2022-04-01 – 2026-03-31 |
Keywords | Neural Networks |
Outline of Annual Research Achievements |
In this fiscal year, we developed an approach that automatically generates neural networks. Neural architecture search (NAS) is an effective approach for automating the design of deep neural networks, and evolutionary computation (EC) is commonly used in NAS due to its global optimization capability. However, the evaluation phase of architecture candidates in EC-based NAS is compute-intensive, which limits its application to many real-world problems. To overcome this challenge, we proposed a novel progressive evaluation strategy for the evaluation phase of convolutional neural network architecture search, in which the number of training epochs of network individuals is progressively increased. The proposed algorithm reduces the computational cost of the evaluation phase and promotes population diversity and fairness by preserving promising networks based on their distribution. We evaluated the resulting progressive evaluation and sub-population preservation NAS (PEP-NAS) algorithm on the CIFAR-10, CIFAR-100, and ImageNet benchmark datasets and compared it with 36 state-of-the-art algorithms, including manually designed networks, reinforcement learning (RL) algorithms, gradient-based algorithms, and other EC-based ones. The experimental results demonstrate that PEP-NAS identifies networks with competitive accuracy while markedly improving the efficiency of the search process.
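To make the mechanism concrete, the following is a minimal, self-contained sketch of such a progressive evaluation loop, not the published PEP-NAS implementation: the binary genomes, the synthetic `train_and_evaluate` stand-in, the epoch schedule, and the truncation-plus-mutation refill are all illustrative assumptions, and the distribution-based sub-population preservation of the actual algorithm is simplified here to keeping the top-ranked fraction.

```python
import random

# Hypothetical stand-in for candidate evaluation: a real run would train the
# encoded CNN for `epochs` epochs and return its validation accuracy. A
# synthetic noisy score keeps this sketch self-contained and runnable.
def train_and_evaluate(architecture, epochs):
    quality = sum(architecture) / len(architecture)  # pretend "goodness" of the genome
    noise = random.gauss(0, 0.1 / epochs)            # fewer epochs -> noisier estimate
    return quality + noise

def progressive_ec_nas(pop_size=20, generations=6, genome_len=8,
                       epoch_schedule=(1, 2, 4, 8, 12, 16), keep_frac=0.5):
    # Random binary genomes stand in for encoded CNN architectures.
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for gen in range(generations):
        # Progressive evaluation: early generations train candidates only
        # briefly, so weak networks are discarded cheaply; later generations
        # spend more epochs on the remaining, more promising ones.
        epochs = epoch_schedule[min(gen, len(epoch_schedule) - 1)]
        ranked = sorted(population,
                        key=lambda a: train_and_evaluate(a, epochs),
                        reverse=True)
        # Preservation, simplified to truncation: keep the top fraction and
        # refill the population with mutated copies to maintain diversity.
        parents = ranked[:int(pop_size * keep_frac)]
        children = []
        while len(parents) + len(children) < pop_size:
            child = list(random.choice(parents))
            child[random.randrange(genome_len)] ^= 1  # single bit-flip mutation
            children.append(child)
        population = parents + children
    # Final pick uses the full epoch budget for the least noisy estimate.
    return max(population, key=lambda a: train_and_evaluate(a, epoch_schedule[-1]))

if __name__ == "__main__":
    print("best architecture encoding:", progressive_ec_nas())
```

The property the sketch preserves is that most candidates are filtered out while the per-candidate epoch budget is still small, so the expensive long training runs are spent only on architectures that survived the cheap early rounds.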
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
The project is progressing as expected, and we were able to publish several papers.
Strategy for Future Research Activity |
Our plan for the next fiscal year is to extend our approach to auto-generating neural networks by proposing a progressive neural predictor that uses score-based sampling to improve the performance of the surrogate model with limited training data. Unlike existing algorithms that rely on initial sample selection, our method uses an online strategy to progressively select new training samples for the surrogate model based on information gathered during the preceding search process. During the iterative process, the sampled scores are dynamically adjusted based on the prediction rankings in each round to keep track of good architectures, which gradually optimizes the surrogate model. In this way, training the predictor and searching for architectures are jointly combined to improve the efficiency of sample utilization. In addition, the surrogate model at different degrees of training is assigned a prediction confidence equal to its accuracy at the current stage.
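As a rough illustration of this planned loop, the sketch below pairs score-based, round-by-round sample selection with a deliberately toy surrogate; the one-nearest-neighbour predictor, the linear confidence schedule, and all names (`true_accuracy`, `surrogate_predict`, and so on) are hypothetical placeholders rather than the design we intend to publish.

```python
import random

# Hypothetical oracle: the expensive "real" evaluation (training the network),
# queried only for the architectures actually selected in each round.
def true_accuracy(arch):
    return sum(arch) / len(arch) + random.gauss(0, 0.02)

# Toy surrogate: predicts accuracy by 1-nearest-neighbour lookup over the
# (architecture, measured accuracy) pairs collected so far. A real predictor
# would be a learned regression model.
def surrogate_predict(arch, samples):
    if not samples:
        return random.random()
    nearest = min(samples,
                  key=lambda s: sum(a != b for a, b in zip(arch, s[0])))
    return nearest[1]

def progressive_predictor_search(rounds=5, per_round=4,
                                 pool_size=200, genome_len=10):
    pool = [[random.randint(0, 1) for _ in range(genome_len)]
            for _ in range(pool_size)]
    samples = []  # online training set for the surrogate
    for _ in range(rounds):
        # Rank the whole pool with the current surrogate.
        ranked = sorted(pool, key=lambda a: surrogate_predict(a, samples),
                        reverse=True)
        # Confidence grows as the surrogate sees more data: early rounds
        # trust the ranking little and mostly explore; later rounds exploit
        # the top-ranked architectures.
        confidence = len(samples) / (rounds * per_round)
        n_exploit = round(per_round * confidence)
        selected = ranked[:n_exploit] + random.sample(pool, per_round - n_exploit)
        # Evaluate the selected architectures for real, progressively
        # refining the surrogate for the next round.
        for arch in selected:
            samples.append((arch, true_accuracy(arch)))
    return max(samples, key=lambda s: s[1])

if __name__ == "__main__":
    arch, acc = progressive_predictor_search()
    print("best found:", arch, "estimated accuracy:", round(acc, 3))
```

The point of the exploration-to-exploitation shift is that a barely trained surrogate should not be allowed to concentrate the sampling budget on its own, possibly wrong, top ranks.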