Project/Area Number | 20H04247 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | General |
Review Section | Basic Section 61030: Intelligent informatics-related |
Research Institution | Institute of Physical and Chemical Research |
Principal Investigator | Khan Emtiyaz, RIKEN, Center for Advanced Intelligence Project, Team Leader (30858022) |
Co-Investigator (Kenkyū-buntansha) | Alquier Pierre, RIKEN, Center for Advanced Intelligence Project, Researcher (10865645)
Yokota Rio, Tokyo Institute of Technology, Global Scientific Information and Computing Center, Professor (20760573) |
Project Period (FY) | 2020-04-01 – 2023-03-31 |
Project Status | Completed (Fiscal Year 2022) |
Budget Amount | ¥18,200,000 (Direct Cost: ¥14,000,000, Indirect Cost: ¥4,200,000)
Fiscal Year 2022: ¥2,470,000 (Direct Cost: ¥1,900,000, Indirect Cost: ¥570,000)
Fiscal Year 2021: ¥3,640,000 (Direct Cost: ¥2,800,000, Indirect Cost: ¥840,000)
Fiscal Year 2020: ¥12,090,000 (Direct Cost: ¥9,300,000, Indirect Cost: ¥2,790,000) |
Keywords | Continual learning / Lifelong learning / Deep learning / Bayesian deep learning / Bayesian principles / Adaptation / Reinforcement learning / Active learning |
Outline of Research at the Start | By using Bayesian principles to "identify, memorize, and recall" useful past experiences during training, our goal is to design lifelong learning AI systems. We expect our new methods to enable application of deep learning in more realistic settings than before. |
Outline of Final Research Achievements | Current deep learning methods cannot learn continually and easily forget information seen long ago. We developed new methods for continual deep learning that reduce this forgetting by identifying and reusing a memory of the past. We show that our methods are universal, that is, any method that works well must have properties similar to ours. Our methods are scalable and can be applied in practical settings. |
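The "reuse a memory of the past" idea above can be illustrated with a generic experience-replay sketch. This is not the project's actual Bayesian method; `ReplayMemory` and `continual_training_batches` are hypothetical names, and the code only shows the common pattern of mixing a small buffer of past examples into each training batch so that updates rehearse earlier tasks.

```python
import random


class ReplayMemory:
    """Fixed-size memory of past examples, filled by reservoir sampling
    so every example seen so far is retained with equal probability."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Replace a random slot with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        # Draw up to k stored examples to rehearse alongside new data.
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))


def continual_training_batches(task_stream, memory, batch_size=4):
    """Yield training batches that mix each new example with replayed
    past examples, so a learner trained on these batches keeps seeing
    (and thus forgets less of) earlier tasks."""
    for task_data in task_stream:
        for example in task_data:
            replay = memory.sample(batch_size - 1)
            yield [example] + replay  # train on this mixed batch
            memory.add(example)
```

When the second task starts, the memory still holds only first-task examples, so the first mixed batches of task two automatically rehearse task one.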
Academic Significance and Societal Importance of the Research Achievements | Deep-learning methods require huge amounts of computing resources and data. Our work reduces this dependence on such resources. We aim to design AI systems that continue to learn and improve throughout their lifetime. |