Visuospatial Episodic Memory based on Spiking Neural Networks using Temporal Coding and its Application to Robot Navigation
Project/Area Number | 13680466
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Single-year Grants
Section | General
Research Field | Intelligent informatics
Research Institution | Soka University
Principal Investigator | ATSUMI Masayasu, Soka University, Faculty of Engineering, Associate Professor (00192980)
Project Period (FY) | 2001 – 2002
Project Status | Completed (Fiscal Year 2002)
Budget Amount | ¥3,400,000 (Direct Cost: ¥3,400,000)
  Fiscal Year 2002: ¥1,700,000 (Direct Cost: ¥1,700,000)
  Fiscal Year 2001: ¥1,700,000 (Direct Cost: ¥1,700,000)
Keywords | Scene recognition / Saliency / Visuospatial episodic memory / Working memory / Planning / Associative spiking neural network / Mobile robot / Competitive spiking neural network / Spiking neuron / Temporal coding / Attention / Competitive neural network / Associative circuit
Research Abstract |
In this research, we proposed a cognitive model on spiking neural networks with temporal coding, in which scene sequences recognized under saliency-based attention control are stored as visuospatial episodic memories, and behavioral planning is carried out by recalling them.

Firstly, we built a new scene recognition model in which objects at saliency-based attended spots are encoded invariantly with respect to position and size, while their positions and sizes are encoded simultaneously. In this model, object recognition is performed by fast learning in a growing two-layered competitive spiking neural network with reciprocal connections between the layers. Simulation experiments with a camera-equipped Khepera robot confirmed that position- and size-invariant object recognition is achieved with very high probability, and that object positions and sizes are encoded well enough for scene recognition. We therefore concluded that the model performs adequately for scene recognition.

Secondly, as a model of episodic memory and of planning by its recall, we built an auto-/hetero-associative spiking neural network combined with a working memory model, in which a state-driven forward sequence and a goal-driven backward sequence recalled on the associative network are integrated in the working memory to form a plan. Simulation experiments on robot route planning confirmed, first, that the associative network can learn forward and backward sequences simultaneously and, second, that a plan is incrementally synthesized by repeating forward and backward sequence recall on the associative network and integrating the recalled sequences in the working memory over successive theta cycles. In particular, we found that goal-directed competition during sequence integration acts as attention control that selects one of several branches in planning.
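The scene recognition model above rests on a growing two-layered competitive spiking neural network trained by fast learning on temporally coded inputs. The following Python sketch is only a minimal illustration of that idea under stated assumptions, not the project's implementation: the latency encoding, match threshold, learning rate, and class names are assumptions, and the reciprocal inter-layer connections and the position/size channels of the original model are omitted.

```python
import numpy as np

# Illustrative sketch (not the project's code): a growing competitive layer
# trained on latency-coded inputs. Feature intensities are converted to spike
# times (strong feature -> early spike); the unit whose weights best match the
# input wins; a new unit is grown when no existing unit matches well enough.

T_MAX = 10.0          # latest possible spike time (ms); assumed value
MATCH_THRESHOLD = 0.8 # cosine match needed to reuse an existing unit; assumed
LEARNING_RATE = 0.5   # "fast learning": large one-shot weight update; assumed

def latency_encode(features):
    """Map feature intensities in [0, 1] to spike latencies in [0, T_MAX]."""
    return T_MAX * (1.0 - np.clip(features, 0.0, 1.0))

class GrowingCompetitiveLayer:
    def __init__(self, input_dim):
        self.input_dim = input_dim
        self.weights = np.empty((0, input_dim))   # one row per category unit

    def respond(self, spike_times):
        """Earlier presynaptic spikes drive units more; return match scores."""
        drive = (T_MAX - spike_times) / T_MAX      # back to [0, 1]; early = strong
        if len(self.weights) == 0:
            return np.array([])
        norms = np.linalg.norm(self.weights, axis=1) * np.linalg.norm(drive) + 1e-9
        return self.weights @ drive / norms        # cosine match per unit

    def learn(self, features):
        """Winner-take-all competition with growth; returns the winning unit."""
        spikes = latency_encode(features)
        scores = self.respond(spikes)
        if scores.size == 0 or scores.max() < MATCH_THRESHOLD:
            # No unit matches well enough: grow a new category unit.
            self.weights = np.vstack([self.weights, features])
            return len(self.weights) - 1
        winner = int(scores.argmax())
        # Fast Hebbian-style update of the winner toward the current input.
        self.weights[winner] += LEARNING_RATE * (features - self.weights[winner])
        return winner

# Usage: present normalized object features taken from attended spots.
layer = GrowingCompetitiveLayer(input_dim=16)
rng = np.random.default_rng(0)
for _ in range(20):
    category = layer.learn(rng.random(16))
```

Growing the layer only when no unit matches is what lets the number of object categories increase with experience, which is the property the abstract attributes to the growing competitive network.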
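The planning model combines forward recall from the current state with backward recall from the goal, integrating the two in working memory over successive theta cycles. The sketch below is a simplified, non-spiking analogue under assumed names and a hypothetical route map: it replaces the associative spiking network with plain transition tables and stands in for goal-directed competition by keeping the first branch that reaches each state, so it only illustrates the forward/backward integration scheme, not the authors' network.

```python
from collections import defaultdict

# Illustrative sketch (not the project's code) of plan synthesis by integrating
# forward recall (from the current state) and backward recall (from the goal)
# in a working memory, one step of each per "theta cycle", until the two
# recalled sequences meet. State names and the route map below are hypothetical.

class AssociativeSequenceMemory:
    def __init__(self):
        self.forward = defaultdict(set)   # hetero-association: state -> successors
        self.backward = defaultdict(set)  # hetero-association: state -> predecessors

    def store_transition(self, s, s_next):
        """Learn the forward and backward sequences simultaneously."""
        self.forward[s].add(s_next)
        self.backward[s_next].add(s)

def plan(memory, start, goal, max_theta_cycles=20):
    # Working memory holds the partial forward and backward sequences.
    fwd_paths = {start: [start]}
    bwd_paths = {goal: [goal]}
    for _ in range(max_theta_cycles):
        # If forward and backward recall share a state, integrate them into a plan.
        meet = set(fwd_paths) & set(bwd_paths)
        if meet:
            s = meet.pop()
            return fwd_paths[s] + bwd_paths[s][-2::-1]
        # One forward recall step: extend every state-driven sequence.
        for s in list(fwd_paths):
            for nxt in memory.forward[s]:
                # Stand-in for goal-directed competition: keep the first branch only.
                fwd_paths.setdefault(nxt, fwd_paths[s] + [nxt])
        # One backward recall step: extend every goal-driven sequence.
        for s in list(bwd_paths):
            for prev in memory.backward[s]:
                bwd_paths.setdefault(prev, bwd_paths[s] + [prev])
    return None

# Usage on a small hypothetical route map.
mem = AssociativeSequenceMemory()
for a, b in [("A", "B"), ("B", "C"), ("B", "D"), ("D", "E"), ("C", "E")]:
    mem.store_transition(a, b)
print(plan(mem, "A", "E"))   # e.g. ['A', 'B', 'C', 'E'] or ['A', 'B', 'D', 'E']
```

Running one forward step and one backward step per loop iteration mirrors the per-theta-cycle alternation described in the abstract; the choice among several reachable branches is where the original model applies goal-directed competition.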
Report | 3 results
Research Products | 13 results