Project/Area Number | 16200020 |
Research Category | Grant-in-Aid for Scientific Research (A) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Cognitive science |
Research Institution | Kyoto University |
Principal Investigator | INUI Toshio, Kyoto University, Graduate School of Informatics, Professor (30107015) |
Co-Investigator (Kenkyū-buntansha) |
SAIKI Jun, Kyoto University, Graduate School of Human and Environmental Studies, Professor (60283470)
SUGIO Takeshi, Doshisha University, Faculty of Culture and Information Science, Lecturer (60335205)
SASAOKA Takafumi, Kyoto University, Graduate School of Informatics, Assistant Professor (60367456)
TANAKA Shigeki, Jin-ai University, Faculty of Human Studies, Associate Professor (70340031)
|
Project Period (FY) | 2004 – 2007 |
Project Status | Completed (Fiscal Year 2007) |
Budget Amount |
¥49,920,000 (Direct Cost: ¥38,400,000, Indirect Cost: ¥11,520,000)
Fiscal Year 2007: ¥5,330,000 (Direct Cost: ¥4,100,000, Indirect Cost: ¥1,230,000)
Fiscal Year 2006: ¥11,700,000 (Direct Cost: ¥9,000,000, Indirect Cost: ¥2,700,000)
Fiscal Year 2005: ¥13,650,000 (Direct Cost: ¥10,500,000, Indirect Cost: ¥3,150,000)
Fiscal Year 2004: ¥19,240,000 (Direct Cost: ¥14,800,000, Indirect Cost: ¥4,440,000)
|
Keywords | multimodal / information integration / reach-to-grasp / vision / motor command / haptics |
Research Abstract |
Interactions between hand movements and visual feedback were investigated. The results suggest that the central nervous system controls movements based on a prediction, or estimation, of the consequent motor accuracy that takes both sensory and motor error into account. They also suggest that the pre-supplementary motor area (pre-SMA), the right posterior parietal cortex (PPC), and the right temporo-parietal junction (TPJ) are involved in internal movement prediction, online evaluation of visuomotor error, and movement estimation to compensate for visual feedback delay, respectively.

Experiments on cross-modal integration between vision and haptics with dynamic objects revealed that a representation of dynamics as a weighted linear summation of sensory cues mediates cross-modal integration, and that the temporal cues used in integration are independent of those used in simultaneity judgments.

Experiments on cross-modal object recognition showed that recognition of 3-D objects within a single modality used both viewpoint-dependent and viewpoint-invariant information, whereas recognition by a modality not used during learning relied exclusively on viewpoint-invariant information. The motor program for a familiar object was found to be highly activated during execution of the action in the conventional manner, whereas visual feedback played a more dominant role for an unfamiliar action. These results suggest that the motor representation for a familiar object is based on previous interactions between the part of the object to be grasped and a particular hand shape.

It was shown experimentally that active exploration of object views using a trackball facilitates object recognition, suggesting that a rule for view transformation of objects is learned through the interaction between hand-movement information from the motor system and visual information. Moreover, active exploration led to a significant decrease in the amplitude of the magnetic field over the left hemisphere during the period in which the current dipole was estimated to lie in the left parietal cortex.
|
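The abstract's phrase "a weighted linear summation of sensory cues" can be illustrated with a minimal sketch. The sketch below assumes reliability-weighted (inverse-variance) weights, a common choice in cue-combination models; this specific weighting rule, the function name, and the numerical values are illustrative assumptions, not data or methods from this project.

# Minimal sketch of weighted linear cue combination (assumption: weights are
# inversely proportional to each cue's variance; values below are hypothetical).

def combine_cues(estimates, variances):
    """Combine unimodal estimates into a single multimodal estimate.

    Each cue contributes in proportion to its reliability (1 / variance),
    and the weights are normalized to sum to 1.
    """
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined_estimate = sum(w * e for w, e in zip(weights, estimates))
    # Under independent Gaussian noise, the combined variance is smaller
    # than either unimodal variance.
    combined_variance = 1.0 / total
    return combined_estimate, combined_variance, weights


if __name__ == "__main__":
    # Hypothetical visual and haptic estimates of an object property
    # (e.g., stiffness of a dynamic object).
    visual_estimate, visual_variance = 0.8, 0.04
    haptic_estimate, haptic_variance = 1.0, 0.16
    est, var, w = combine_cues([visual_estimate, haptic_estimate],
                               [visual_variance, haptic_variance])
    print(f"weights: {w}")            # the less noisy cue (vision here) dominates
    print(f"combined estimate: {est:.3f}, combined variance: {var:.3f}")

In this sketch the combined estimate lies closer to the more reliable cue, and its variance is lower than that of either cue alone, which is the usual signature of linear cue integration.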