Project/Area Number | 11308012 |
Research Category | Grant-in-Aid for Scientific Research (A) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Intelligent informatics |
Research Institution | NAGOYA UNIVERSITY |
Principal Investigator | OHNISHI Noboru, Nagoya University, Graduate School of Engineering, Professor (70185338) |
Co-Investigators (Kenkyū-buntansha) |
KUDO Hiroaki, Nagoya University, Graduate School of Engineering, Associate Professor (70283421)
YAMAMURA Tsuyoshi, Aichi Prefectural University, Faculty of Information Science, Associate Professor (00242826)
MUKAI Toshiharu, RIKEN, BMC Research Center, Team Leader (80281632)
MATSUMOTO Tetsuya, Nagoya University, Graduate School of Engineering, Research Associate (40252275)
TAKEUCHI Yoshinori, Nagoya University, Graduate School of Engineering, Research Associate (60324464)
TANAKA Toshimitsu, Nagoya University, Computation Center, Associate Professor (00262923)
|
Project Period (FY) | 1999 – 2002 |
Project Status | Completed (Fiscal Year 2002) |
Budget Amount | ¥20,770,000 (Direct Cost: ¥19,600,000, Indirect Cost: ¥1,170,000)
Fiscal Year 2002: ¥2,080,000 (Direct Cost: ¥1,600,000, Indirect Cost: ¥480,000)
Fiscal Year 2001: ¥2,990,000 (Direct Cost: ¥2,300,000, Indirect Cost: ¥690,000)
Fiscal Year 2000: ¥6,700,000 (Direct Cost: ¥6,700,000)
Fiscal Year 1999: ¥9,000,000 (Direct Cost: ¥9,000,000)
|
Keywords | Sensor Fusion / Visual Sense / Auditory Sense / Grouping / Learning / Motion / Sound / Action / Tactile Sense |
Research Abstract |
A newborn baby can correlate mouth movements with speech and acquires the ability to speak based on this correspondence. In this study, we developed novel methods for relating events observed through different sensory modalities, such as vision and hearing, and for acquiring knowledge based on the correspondences obtained. First, we summarized the laws for grouping multi-modal events. Based on these laws, we developed a general cue for finding correspondences between visual and auditory events, i.e. cross-modal grouping, using the similarity of motion and rhythm and the simultaneity of onsets and of changes in moving direction. Furthermore, we introduced a spatio-temporal invariant that allows correspondences to be found even when sound-emitting objects move through the environment. Next, we compared active and passive observation in knowledge acquisition. We found that active observation is superior to passive observation because it reduces the information-processing load through selective attention and exploits the feedback produced by acting on the environment. We then built an active observation system consisting of an artificial head with a camera and a microphone, a manipulator serving as a hand that acts on the external world, and a computer, and we implemented software that deepens the system's understanding of a scene by iterating observation and motion. Finally, we conducted experiments in a real environment containing two moving, sound-emitting objects and obtained satisfactory results demonstrating the validity of the proposed method.
|
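The onset-simultaneity cue described in the abstract can be illustrated with a minimal sketch. This is not the project's actual implementation: the signals, thresholds, and function names below are all assumptions. Given per-frame audio energy and per-frame visual motion magnitude for a candidate object, we detect onsets (frames where a signal rises through a threshold) in each stream and score what fraction of audio onsets co-occur with a visual onset within a small temporal tolerance; the candidate with the highest score is grouped with the sound.

```python
# Hedged sketch of cross-modal grouping by onset simultaneity.
# Signals are toy per-frame magnitudes; thresholds/tolerances are assumptions.

def detect_onsets(signal, threshold=0.5):
    """Return frame indices where the signal rises through the threshold."""
    return [t for t in range(1, len(signal))
            if signal[t - 1] < threshold <= signal[t]]

def onset_simultaneity_score(audio, motion, threshold=0.5, tolerance=1):
    """Fraction of audio onsets matched by a visual onset within +/- tolerance frames."""
    a_on = detect_onsets(audio, threshold)
    v_on = detect_onsets(motion, threshold)
    if not a_on:
        return 0.0
    matched = sum(1 for ta in a_on
                  if any(abs(ta - tv) <= tolerance for tv in v_on))
    return matched / len(a_on)

# Toy example: two visual candidates for one audio stream.
audio    = [0, 1, 0, 0, 1, 0, 0, 1, 0]  # audio onsets at frames 1, 4, 7
obj_sync = [0, 1, 0, 0, 1, 0, 0, 1, 0]  # moves in step with the sound
obj_off  = [0, 0, 0, 1, 0, 0, 0, 0, 1]  # moves out of step

print(onset_simultaneity_score(audio, obj_sync))  # 1.0: every onset co-occurs
print(onset_simultaneity_score(audio, obj_off))   # lower: grouped less strongly
```

In a real system the same idea would be applied to smoothed audio energy and optical-flow magnitude rather than binary toy signals, and combined with the other cues the abstract mentions (rhythm similarity and simultaneity of direction changes).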