Budget Amount
¥3,600,000 (Direct Cost: ¥3,600,000)
Fiscal Year 2004: ¥700,000 (Direct Cost: ¥700,000)
Fiscal Year 2003: ¥1,100,000 (Direct Cost: ¥1,100,000)
Fiscal Year 2002: ¥1,800,000 (Direct Cost: ¥1,800,000)
Research Abstract
The purpose of this research is to investigate human-robot interfaces that take into account the various uncertainties contained in the task environment and in human factors. We developed two systems:

(1) A human-task guiding system: To guide human tasks, a mobile robot must cope with various uncertainties in the environment and in its own sensors. We developed a sensor-planning method for the mobile-robot localization problem. The causal relations among local sensing results, actions, and the belief in the global localization are represented by a Bayesian network. Initially, the structure of the Bayesian network is learned from complete data of the environment using the K2 algorithm combined with a GA (genetic algorithm). In the execution phase, when the robot is "kidnapped" to some unknown place, it plans an optimal sensing action by taking into account the trade-off between the sensing cost and the global localization belief, the latter obtained by inference in the Bayesian network. We validated the learning and planning algorithms through simulation and real-robot experiments in an office environment.

(2) A navigation-assisting system: We developed an intelligent wheelchair whose movement can be directed by two methods: hand (and finger) gestures and voice commands. The first method uses an infrared camera to reliably extract the hand region in the image; an image processor then extracts the principal axis of the region as the hand direction. The second method uses a voice-recognition subsystem named Julius, which employs hidden Markov models to recognize Japanese words directing the movement, such as 'Mae' (forward), 'Migi' (right), 'Hidari' (left), and 'Tomare' (stop). Applying acceptance criteria to the recognition results, we conducted experiments validating that the human-robot interface controls the wheelchair properly with both voice and gesture commands.
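The sensing-action planning step described above can be sketched as a simple expected-utility selection. This is a minimal illustration, not the project's actual algorithm: the action names, costs, and belief values below are hypothetical, and in the real system the expected belief would come from inference in the learned Bayesian network rather than a lookup table.

```python
def select_sensing_action(actions, belief_gain, cost, trade_off=0.5):
    """Pick the sensing action that best balances expected localization
    belief against sensing cost.

    belief_gain[a]: expected belief in the correct global pose after
                    performing action a (stand-in for Bayesian-network
                    inference in the actual system).
    cost[a]:        cost of performing action a (e.g., sensing time).
    trade_off:      weight on cost relative to belief (hypothetical knob).
    """
    return max(actions, key=lambda a: belief_gain[a] - trade_off * cost[a])

# Illustrative values only.
actions = ["scan_left", "scan_right", "scan_full"]
belief_gain = {"scan_left": 0.60, "scan_right": 0.55, "scan_full": 0.90}
cost = {"scan_left": 0.20, "scan_right": 0.20, "scan_full": 0.70}

best = select_sensing_action(actions, belief_gain, cost)
```

Here the full scan wins despite its higher cost because its expected belief gain dominates; lowering `trade_off` makes the planner more willing to pay for informative sensing.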
|
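Extracting the principal axis of a binary hand region, as in the gesture interface above, is commonly done with second-order central image moments. The sketch below shows that standard moment-based computation; it is an assumption that the project's image processor worked this way, and the test mask is purely illustrative.

```python
import numpy as np

def principal_axis(mask):
    """Angle (radians, in image array coordinates) of the principal axis
    of a binary region, from second-order central moments."""
    ys, xs = np.nonzero(mask)          # pixel coordinates of the region
    xc, yc = xs.mean(), ys.mean()      # centroid
    mu20 = ((xs - xc) ** 2).mean()     # central moments
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    # Standard orientation formula: theta = 0.5 * atan2(2*mu11, mu20 - mu02)
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

# A diagonal bar of pixels: its principal axis is at 45 degrees.
mask = np.eye(32, dtype=bool)
angle_deg = np.degrees(principal_axis(mask))
```

Note that image row indices increase downward, so the sign convention of the returned angle differs from the usual mathematical one; the hand direction would be mapped to a steering command accordingly.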