2020 Fiscal Year Research-status Report
A Declarative Memory Neural Model for Continual Self-Supervised Learning of Intelligent Agents
Project/Area Number | 20K23348 |
Research Institution | Tokyo Metropolitan University |
Principal Investigator | Chin Wei Hong, Tokyo Metropolitan University, Graduate School of Systems Design, Project Assistant Professor (10876650) |
Project Period (FY) | 2020-09-11 – 2022-03-31 |
Keywords | lifelong learning / robot navigation / episodic memory / semantic memory / unsupervised learning / topological map |
Outline of Annual Research Achievements |
I have published a paper entitled "Multichannel Recurrent Kernel Machines for Robot Episodic-Semantic map building" at ICDL 2020, Chile. The proposed method comprises two memory layers: an Episodic Memory layer and a Semantic Memory layer. Each layer contains one or more Infinite Echo State Networks, each assigned a different learning task. The Episodic Memory layer incrementally clusters incoming sensory data as nodes and learns fine-grained spatiotemporal relationships among them. The Semantic Memory layer utilizes task-relevant cues to adjust the level of architectural flexibility and generates a topological semantic map containing more compact episodic representations. The generated topological semantic map serves as the robot's memory for path planning and navigation.
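As an illustration only (not the published implementation, which builds on Infinite Echo State Networks), the following Python sketch shows the kind of incremental node-clustering rule described above: a sensory sample becomes a new node when it is sufficiently far from all existing nodes, and edges record the temporal transitions between winning nodes. The class name EpisodicMemory and the parameters insert_threshold and learning_rate are hypothetical.

# Minimal sketch of incremental clustering of sensory data into topological-map nodes.
# Illustrative only; names and thresholds are assumptions, not the published method.
import numpy as np

class EpisodicMemory:
    def __init__(self, insert_threshold=0.5, learning_rate=0.1):
        self.nodes = []              # prototype vectors (one per node)
        self.edges = set()           # (i, j) pairs of temporally adjacent nodes
        self.insert_threshold = insert_threshold
        self.learning_rate = learning_rate
        self._last_winner = None

    def update(self, x):
        """Cluster one sensory sample x (1-D array) into the map and return its node index."""
        x = np.asarray(x, dtype=float)
        if not self.nodes:
            self.nodes.append(x)
            self._last_winner = 0
            return 0
        dists = [np.linalg.norm(x - n) for n in self.nodes]
        winner = int(np.argmin(dists))
        if dists[winner] > self.insert_threshold:
            # Novel sample: insert it as a new node.
            self.nodes.append(x)
            winner = len(self.nodes) - 1
        else:
            # Familiar sample: move the winning prototype slightly towards it.
            self.nodes[winner] = self.nodes[winner] + self.learning_rate * (x - self.nodes[winner])
        # Record the temporal transition from the previously winning node.
        if self._last_winner is not None and self._last_winner != winner:
            self.edges.add((self._last_winner, winner))
        self._last_winner = winner
        return winner

For example, feeding a stream of sensory vectors (here random data as a stand-in) grows the node set and the transition edges incrementally:

memory = EpisodicMemory(insert_threshold=0.5)
for frame in np.random.rand(100, 8):
    memory.update(frame)
print(len(memory.nodes), "nodes,", len(memory.edges), "edges")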
|
Current Status of Research Progress |
1: Research has progressed more than originally planned.
Reason
My research is progressing smoothly. Several experiments have been conducted with a real mobile robot (Roomba) in an indoor environment under different conditions, such as lighting, the number of pedestrians, and the placement of furniture. The experimental results showed that the proposed method was able to generate a topological map representing the explored environment. I have published a conference paper and presented the research findings at the ICDL 2020 conference.
|
Strategy for Future Research Activity |
For future work, I will extend the proposed method to autonomously navigate to a goal destination. Next, I will upgrade the robot by integrating more sensors and a more capable onboard computer for multi-modal learning. Then, I plan to conduct long-term exploration in more challenging environments to further validate the lifelong learning ability of the proposed method. Finally, I will write a journal paper to publish the research findings.
|