Project/Area Number | 20K23348 |
Research Category | Grant-in-Aid for Research Activity Start-up |
Allocation Type | Multi-year Fund |
Review Section | 1002: Human informatics, applied informatics and related fields |
Research Institution | Tokyo Metropolitan University |
Principal Investigator | CHIN WEI HONG, Tokyo Metropolitan University, Graduate School of Systems Design, Project Assistant Professor (10876650) |
Project Period (FY) | 2020-09-11 – 2022-03-31 |
Project Status | Completed (Fiscal Year 2021) |
Budget Amount | ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000) |
Fiscal Year 2021: ¥1,300,000 (Direct Cost: ¥1,000,000, Indirect Cost: ¥300,000)
Fiscal Year 2020: ¥1,040,000 (Direct Cost: ¥800,000, Indirect Cost: ¥240,000)
Keywords | lifelong learning / topological map / continual learning / self-organizing / active learning / memory neural network / cognitive robotics / incremental learning / topological network / robot navigation / episodic memory / semantic memory / unsupervised learning / deep learning / neural network / self-supervised learning |
Outline of Research at the Start | This research proposes a novel recurrent neural model that mimics the human declarative memory system for lifelong learning. The work constitutes a basis for intelligent learning agents to acquire higher-level cognitive capabilities for accomplishing real-world learning tasks. |
Outline of Final Research Achievements | Machine learning models perform well when given precisely structured, balanced, and homogenized data. However, when several tasks with incremental data are presented, the performance of most of these models degrades. Inspired by the Complementary Learning Systems (CLS) theory in neuroscience, episodic-semantic memory-based frameworks have received much attention. Conventional methods need to perform batch normalization of the data and are sensitive to vigilance hyperparameters across different datasets. I propose a Robust Growing Memory Network (RGMN) that continuously learns incoming data without normalization and is largely insensitive to the vigilance hyperparameter. The RGMN is a self-organizing topological network that models human episodic memory, and its network size can grow and shrink in response to the data. A long-term memory buffer retains the largest and smallest data values, which are then used for learning. |
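The core mechanism described above can be illustrated with a minimal sketch: a self-organizing network that inserts a new node when no existing node matches the input within a vigilance threshold, and that keeps a running min/max buffer so distances on raw, unnormalized data stay comparable. All names here (`GrowingMemoryNetwork`, `learn`) are hypothetical; this is a simplified illustration of the growing-memory idea, not the author's RGMN implementation, and node pruning ("shrink") is omitted for brevity.

```python
import numpy as np

class GrowingMemoryNetwork:
    """Toy growing self-organizing memory (illustrative, not RGMN itself)."""

    def __init__(self, vigilance=0.5, lr=0.1):
        self.vigilance = vigilance  # match threshold for inserting a node
        self.lr = lr                # learning rate for the winning node
        self.nodes = []             # node weight vectors
        self.lo = None              # running per-dimension minimum
        self.hi = None              # running per-dimension maximum

    def _update_range(self, x):
        # Long-term buffer: retain the smallest/largest values seen so far.
        self.lo = x.copy() if self.lo is None else np.minimum(self.lo, x)
        self.hi = x.copy() if self.hi is None else np.maximum(self.hi, x)

    def _scaled_dist(self, a, b):
        # Scale by the observed data range instead of batch normalization.
        span = np.where(self.hi > self.lo, self.hi - self.lo, 1.0)
        return float(np.linalg.norm((a - b) / span))

    def learn(self, x):
        """Process one sample online; return the index of the matched node."""
        x = np.asarray(x, dtype=float)
        self._update_range(x)
        if not self.nodes:
            self.nodes.append(x.copy())
            return 0
        dists = [self._scaled_dist(x, w) for w in self.nodes]
        winner = int(np.argmin(dists))
        if dists[winner] > self.vigilance:
            self.nodes.append(x.copy())  # grow: no node matches well enough
            return len(self.nodes) - 1
        # Otherwise adapt: move the winner toward the input.
        self.nodes[winner] += self.lr * (x - self.nodes[winner])
        return winner

if __name__ == "__main__":
    net = GrowingMemoryNetwork(vigilance=0.3)
    # Features on very different raw scales, fed without normalization.
    for sample in [[0, 0], [1000, 5], [0.1, 0.05], [1001, 5.2]]:
        net.learn(sample)
    print(len(net.nodes))  # two clusters emerge despite the scale gap
```

Because the min/max buffer rescales each dimension by its observed range, a feature spanning thousands and one spanning fractions contribute comparably to the match score, which is the role batch normalization would otherwise play.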
Academic Significance and Societal Importance of the Research Achievements | Lifelong learning is an essential yet complex capability for computational models and autonomous agents. Although progress in this field has been remarkable, existing lifelong learning models fall far short of biological systems in flexibility, reliability, and scalability. The proposed RGMN models human episodic memory, continuously learns input data without normalization, and is robust to parameter settings. As future work, the effectiveness of the proposed method will be further validated on more challenging datasets. Exploiting the spatiotemporal connectivity of the memory network for time-series applications such as human gesture recognition and action classification is another direction for future research. |