2006 Fiscal Year Final Research Report Summary
Cooperative and competitive learning of multiple humanoids based on mapping the other's behavior onto the self behavior space
Project/Area Number | 16200012 |
Research Category | Grant-in-Aid for Scientific Research (A) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Perception information processing / Intelligent robotics |
Research Institution | Osaka University |
Principal Investigator | ASADA Minoru, Osaka University, Graduate School of Engineering, Professor (60151031) |
Co-Investigator (Kenkyū-buntansha) | HOSODA Koh, Osaka University, Graduate School of Engineering, Associate Professor (10252610); TAKAHASHI Yasutake, Osaka University, Graduate School of Engineering, Assistant Professor (90324798) |
Project Period (FY) | 2004 – 2006 |
Keywords | humanoid / body representation / imitation / intention estimation / language acquisition |
Research Abstract |
Recent progress in developing humanoids has opened up a variety of research issues. In most studies, however, the controllers are specified explicitly by their designers. To overcome the limitations of such top-down design, this research project aimed at developing a methodology for the autonomous learning of adaptive behaviors by multiple humanoids, viewed from the developmental process of imitation. The project dealt with four topics.

In the first, "body representation", we proposed a learning model in which the sensorimotor mappings constituting the body representation are acquired through behavioral experience. Using the acquired mappings, ball-passing behaviors between two humanoids were realized.

In the second, "recognition and imitation of behavior", we proposed a learning model for the mapping between the self's and the other's behavior at the posture, motion, and behavior levels. After acquiring the mapping, gesture communication was learned between a human and a humanoid.
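The summary does not specify the form of the learned self-other mapping, so the following is only a minimal sketch of the general idea, assuming a linear map fitted by ordinary least squares from observed partner-posture features onto the robot's own joint space; all data, dimensions, and names here are hypothetical.

```python
# Minimal sketch: learn a mapping from the observed other's posture features
# onto the self behavior space (joint angles) from paired experience data.
# The linear model and all names are illustrative assumptions, not the
# project's actual learning model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired experiences: partner posture features observed by the
# robot, alongside the robot's own joint angles for the same behavior.
n_samples, n_features, n_joints = 200, 6, 4
observed = rng.normal(size=(n_samples, n_features))
true_map = rng.normal(size=(n_features, n_joints))
self_joints = observed @ true_map + 0.01 * rng.normal(size=(n_samples, n_joints))

# Fit W so that self_joints ~= observed @ W (ordinary least squares).
W, *_ = np.linalg.lstsq(observed, self_joints, rcond=None)

# Imitation step: project a newly observed posture onto the self behavior space.
new_observation = rng.normal(size=(1, n_features))
print("commanded joint angles:", (new_observation @ W).round(3))
```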
In the third, "estimation of the other's intention", we proposed a novel method for inferring another agent's intention based on state value estimation. The method needs neither a precise world model nor a coordinate transformation system to cope with view dependency: the observer infers the other's intention not from the precise object trajectory in Cartesian space but from the transition of the estimated state value during the observed behavior.

In the fourth, "lexical acquisition", we proposed a lexical acquisition model that uses saliency to associate the visual features of observed objects with labels uttered by a caregiver; the robot changes its attention and learning rate according to the saliency. Simulation experiments showed that the saliency-based learning model effectively associates the given labels with the observed features. Moreover, in an experiment with a real humanoid robot, the visual features were represented with self-organizing maps, which adaptively represent the shapes of observed objects independently of the viewpoint.
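For the third topic, the following is a minimal sketch of intention inference from state-value transitions, under strong simplifying assumptions: the observer holds a hand-coded value function for each candidate goal on a one-dimensional track and attributes to the other the goal whose value keeps rising along the observed trajectory. The value function, goals, and names are illustrative, not taken from the project.

```python
# Minimal sketch: infer the other's intention from the transition of the
# estimated state value rather than from a precise Cartesian trajectory.
# The hand-coded value function and candidate goals are illustrative.

def value(state, goal):
    """Estimated state value: higher when the state is closer to the goal."""
    return -abs(goal - state)

candidate_goals = {"left_end": 0, "right_end": 10}
observed_trajectory = [5, 6, 7, 8]  # the other's states, moving rightward

def infer_intention(trajectory, goals):
    # Score each candidate goal by the total increase of its state value
    # along the trajectory; the inferred intention is the goal under which
    # the observed behavior keeps improving.
    scores = {
        name: sum(value(s2, g) - value(s1, g)
                  for s1, s2 in zip(trajectory, trajectory[1:]))
        for name, g in goals.items()
    }
    return max(scores, key=scores.get), scores

intention, scores = infer_intention(observed_trajectory, candidate_goals)
print(intention, scores)  # right_end: its value rises by +1 at every step
```

In the reported method the state values are estimated from experience rather than hand-coded as here, which is what allows the inference to work without a coordinate transformation system.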
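For the fourth topic, the following is a minimal sketch of saliency-modulated label learning: one prototype feature vector per label is pulled toward the observed features with a step size scaled by a saliency signal, so conspicuous objects are associated with the caregiver's label faster. The update rule and all names are assumptions; the summary states only that attention and learning rate depend on saliency, and the real system represented features with self-organizing maps rather than this single-prototype update.

```python
# Minimal sketch: saliency-scaled association of caregiver labels with
# visual features. Names, dimensions, and the update rule are illustrative.
import numpy as np

labels = ["ball", "cup", "box"]
n_features = 5
assoc = np.zeros((len(labels), n_features))  # one prototype per label

def observe(label, features, saliency, base_rate=0.1):
    """Move the label's prototype toward the observed features; the learning
    rate grows with saliency, mimicking attention-weighted learning."""
    i = labels.index(label)
    assoc[i] += base_rate * saliency * (features - assoc[i])

rng = np.random.default_rng(1)
ball_features = rng.normal(size=n_features)
for _ in range(20):
    # A highly salient ball is labeled by the caregiver on each trial.
    observe("ball", ball_features + 0.05 * rng.normal(size=n_features),
            saliency=0.9)

# After learning, a new view of the ball is closest to the "ball" prototype.
query = ball_features + 0.05 * rng.normal(size=n_features)
print(labels[int(np.argmin(np.linalg.norm(assoc - query, axis=1)))])
```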
Research Products (43 results)