2020 Fiscal Year Annual Research Report
Life-Long Deep Learning using Bayesian Principles
Project/Area Number |
20H04247
|
Research Institution | Institute of Physical and Chemical Research |
Principal Investigator |
Khan Emtiyaz  RIKEN Center for Advanced Intelligence Project, Team Leader (30858022)
|
Co-Investigator(Kenkyū-buntansha) |
Alquier Pierre  RIKEN Center for Advanced Intelligence Project, Researcher (10865645)
Yokota Rio  Tokyo Institute of Technology, Global Scientific Information and Computing Center, Associate Professor (20760573)
|
Project Period (FY) |
2020-04-01 – 2023-03-31
|
Keywords | continual learning / Bayesian principles / deep learning |
Outline of Annual Research Achievements |
Our goal was to design AI systems that continue to learn and improve throughout their lifetime. Deep-learning models trained this way catastrophically forget the past and fail. This fiscal year we worked mainly on continual learning and wrote one research paper on this topic, published at NeurIPS 2020 and accepted as an oral presentation (105 out of 9,454 submissions).
- Continual Deep Learning by Functional Regularisation of Memorable Past (NeurIPS 2020) P. Pan*, S. Swaroop*, A. Immer, R. Eschenhagen, R. E. Turner, M.E. Khan
|
Current Status of Research Progress |
1: Research has progressed more than originally planned.
Reason
We were able to proceed as planned and wrote one paper, which was well received and accepted as an oral presentation at a reputable conference.
|
Strategy for Future Research Activity |
We will continue working on continual learning towards our ultimate goal of scaling it up to ImageNet. Before we can do that, we will first try to understand single-task adaptation, and then run experiments at a larger scale.
|
-
[Journal Article] Continual deep learning by functional regularization of the memorable past (2020)
Author(s)
Pan, P., Swaroop, S., Immer, A., Eschenhagen, R., Turner, R. and Khan, M. E.
-
Journal Title
Advances in Neural Information Processing Systems
Volume: 374
Pages: 4453-4464
Peer Reviewed / Open Access / Int'l Joint Research