Synergic Attention-based Visuospatial Episodic Memory and its Application to Wearable Navigation
Project/Area Number | 15500075 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Media informatics/Database |
Research Institution | Soka University |
Principal Investigator | ATSUMI Masayasu, Soka University, Faculty of Engineering, Associate Professor (00192980) |
Project Period (FY) | 2003 – 2005 |
Project Status | Completed (Fiscal Year 2005) |
Budget Amount | ¥3,700,000 (Direct Cost: ¥3,700,000) |
Fiscal Year 2005: ¥1,000,000 (Direct Cost: ¥1,000,000)
Fiscal Year 2004: ¥1,000,000 (Direct Cost: ¥1,000,000)
Fiscal Year 2003: ¥1,700,000 (Direct Cost: ¥1,700,000)
Keywords | Attention / Visuospatial episode / Competitive neural network / Self-organized learning / Speech interaction / User model / Bayesian network / Information providing / Attention control / Scene encoding / Speech-based information providing / Wearable system / Scene memory / Visuospatial episodic memory |
Research Abstract |
In this research, we proposed fundamental methods for partner machines, such as familiar computers and robots, that support the daily activities of their human owners by paying attention to the owners' daily space and sharing attention and concern with them. The methods cover (1) representation and control of synergic attention between an owner and his/her partner machine, (2) learning of attention structure with a competitive neural network, and (3) Bayesian network-based personal modeling and real-world information navigation driven by the owner's visual attention and response utterances.

For synergic attention, we built a method that modulates saliency-driven attention with active spatial attention induced by the owner's concern. We then built an attention transition and control method and, on this basis, proposed a synergic attention control method between an owner and his/her partner machine.

For learning of attention structure, we built a model that encodes the set of attended spots in a scene as an attention-structure code. The model consists of a competitive neural network named COGNET, which encodes the objects contained in attended spots, and a mechanism that encodes the sizes and positions of those spots. Experimental evaluation of COGNET's main features, namely fast self-organized learning and at-a-glance recognition of objects in attended spots, confirmed that attention-structure codes are useful for encoding visuospatial episodes.

For personal modeling and information providing based on it, we built a Bayesian network-based personal model of an owner and the architecture of a partner machine that uses this model for inference. The personal model infers the owner's concern from attended-object codes, utterance responses expressed as impression words, and the context around the owner, and it recommends information that satisfies the owner's information needs. Experiments with an interactive speech-based news provider system equipped with this personal model confirmed that the personal model made information providing personalized for the owner.
|
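To make the mechanisms in the abstract more concrete, the two Python sketches below illustrate them under stated assumptions. They are not the project's actual implementations; all function names, array shapes, and parameters are hypothetical.

First, a minimal sketch of modulating bottom-up, saliency-driven attention with an active spatial-attention prior induced by the owner's concern, as described for synergic attention: a crude center-surround saliency map is weighted by a Gaussian bump around the location the owner is assumed to attend to, and the peak of the combined map is taken as the attended spot.

```python
# Sketch only: saliency-driven attention modulated by a top-down spatial prior.
import numpy as np
from scipy.ndimage import gaussian_filter

def bottom_up_saliency(gray):
    """Crude center-surround saliency: difference of two Gaussian blurs."""
    center = gaussian_filter(gray, sigma=2)
    surround = gaussian_filter(gray, sigma=8)
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-8)

def spatial_prior(shape, focus_yx, sigma=40.0):
    """Top-down prior: a Gaussian bump around the owner's assumed focus."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (ys - focus_yx[0]) ** 2 + (xs - focus_yx[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def attended_spot(gray, focus_yx, top_down_weight=0.7):
    """Combine saliency and prior multiplicatively; return the peak location."""
    combined = bottom_up_saliency(gray) * (
        (1 - top_down_weight) + top_down_weight * spatial_prior(gray.shape, focus_yx)
    )
    return np.unravel_index(np.argmax(combined), combined.shape), combined

if __name__ == "__main__":
    frame = np.random.rand(240, 320)              # stand-in for a camera frame
    spot, _ = attended_spot(frame, focus_yx=(120, 200))
    print("attended spot (y, x):", spot)
```

Second, a toy winner-take-all competitive layer standing in for the object-encoding part of an attention-structure code. This is not COGNET itself; it only shows how feature vectors of attended spots can be self-organized into discrete object codes, which together with quantized spot sizes and positions would form one scene's attention-structure code.

```python
# Sketch only: online competitive (winner-take-all) encoding of attended spots.
import numpy as np

class CompetitiveEncoder:
    def __init__(self, n_units, dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_units, dim))   # one prototype per unit
        self.lr = lr

    def encode(self, x, learn=True):
        """Return the index of the winning unit; optionally move it toward x."""
        winner = int(np.argmin(np.linalg.norm(self.w - x, axis=1)))
        if learn:
            self.w[winner] += self.lr * (x - self.w[winner])
        return winner

enc = CompetitiveEncoder(n_units=16, dim=32)
spot_features = np.random.rand(5, 32)              # stand-in features of 5 attended spots
scene_code = [enc.encode(f) for f in spot_features]
print("object codes of attended spots:", scene_code)
```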