2007 Fiscal Year Final Research Report Summary
The computation of multimodal representation through dynamic interaction
Project/Area Number | 16200020 |
Research Category | Grant-in-Aid for Scientific Research (A) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Cognitive science |
Research Institution | Kyoto University |
Principal Investigator | INUI Toshio, Kyoto University, Graduate School of Informatics, Professor (30107015) |
Co-Investigator (Kenkyū-buntansha) |
SAIKI Jun, Kyoto University, Graduate School of Human and Environmental Studies, Professor (60283470)
SUGIO Takeshi, Doshisha University, Faculty of Culture and Information Science, Lecturer (60335205)
SASAOKA Takafumi, Kyoto University, Graduate School of Informatics, Assistant Professor (60367456)
|
Project Period (FY) | 2004 – 2007 |
Keywords | multimodal / information integration / reach-to-grasp / vision / motor command |
Research Abstract |
Interactions between hand movements and visual feedback were investigated. The results suggest that the central nervous system controls movements based on a prediction, or estimation, of the consequent motor accuracy that takes into account both sensory and motor error. They also suggest that the pre-supplementary motor area (pre-SMA), the right posterior parietal cortex (PPC), and the right temporo-parietal junction (TPJ) are involved, respectively, in internal movement prediction, online evaluation of visuomotor error, and movement estimation that compensates for visual feedback delay. Experiments on cross-modal integration between vision and haptics with dynamic objects revealed that a representation of dynamics as a weighted linear summation of sensory cues mediates cross-modal integration, and that the temporal cues used in integration are independent of those used in simultaneity judgments. Experiments on cross-modal object recognition showed that recognition of 3-D objects within a single modality utilized both viewpoint-dependent and viewpoint-invariant information, whereas object recognition by a modality not used in learning exclusively utilized viewpoint-invariant information. It was found that the motor program for a familiar object is highly activated during execution of the action in the conventional manner, while the role of visual feedback is more dominant for an unfamiliar action. These results suggest that the motor representation for a familiar object is based on previous interactions between the part of the object to be grasped and a particular hand shape. It was experimentally shown that active exploration of object views using a trackball facilitates object recognition. This suggests that a rule for view transformation of objects was learned through the interaction between information provided by hand movements from the motor system and visual information. Moreover, it was demonstrated that active exploration leads to a significant decrease in the amplitude of the magnetic field in the left hemisphere during the time period when the current dipole was estimated in the left parietal cortex.
|
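The "weighted linear summation of sensory cues" named in the abstract can be sketched as a reliability-weighted average, the standard maximum-likelihood formulation of cue integration. This is a minimal illustration only; the function name and the numeric values below are hypothetical, not taken from the study.

```python
# Sketch of weighted linear cue integration: each sensory estimate is
# weighted by its reliability (inverse variance), so more reliable cues
# contribute more to the combined percept.

def integrate_cues(estimates, variances):
    """Combine sensory estimates as a reliability-weighted linear sum."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * e for w, e in zip(weights, estimates))
    # The combined estimate's variance is lower than any single cue's.
    combined_variance = 1.0 / total
    return combined, combined_variance

# Illustrative values: visual and haptic estimates of an object property.
visual_estimate, haptic_estimate = 10.0, 12.0
visual_variance, haptic_variance = 1.0, 4.0   # the visual cue is more reliable here
estimate, variance = integrate_cues(
    [visual_estimate, haptic_estimate],
    [visual_variance, haptic_variance],
)
# estimate → 10.4 (pulled toward the more reliable visual cue)
# variance → 0.8  (below both 1.0 and 4.0)
```

The design point is that the weights fall out of the variances rather than being set by hand, which is what makes the combined estimate optimal under independent Gaussian noise.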
Research Products
(56 results)