2004 Fiscal Year Final Research Report Summary
A Study on Human-Robot Interface based on Uncertainty Modeling and Inference
Project/Area Number | 14550244 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Intelligent mechanics/Mechanical systems |
Research Institution | CHUO UNIVERSITY |
Principal Investigator | SAKANE Shigeyuki, Chuo University, Faculty of Science and Engineering, Professor (10276694) |
Project Period (FY) | 2002 – 2004 |
Keywords | uncertainty modeling / human-robot interface / Bayesian network / human task guiding system / navigation assisting system / electric wheelchair / mobile robot localization / sensor planning |
Research Abstract |
The purpose of this research is to investigate human-robot interfaces that take into account the various uncertainties contained in the task environment and in human factors. We have developed two systems.
(1) Human task guiding system: To guide human tasks, a mobile robot has to cope with various uncertainties in the environment and in its own sensors. We developed a sensor planning method for the mobile robot localization problem. Causal relations between local sensing results, actions, and the belief of global localization are represented with a Bayesian network. Initially, the structure of the Bayesian network is learned from complete data of the environment using the K2 algorithm combined with a genetic algorithm (GA). In the execution phase, when the robot is kidnapped to some place, it plans an optimal sensing action by taking into account the trade-off between the sensing cost and the global localization belief, which is obtained by inference in the Bayesian network; a sketch of this trade-off appears below. We validated the learning and planning algorithms by simulation and by real robot experiments in an office environment.
(2) Navigation assisting system: We have developed an intelligent wheelchair whose movement can be directed in two ways: by hand (and finger) gestures and by voice. The gesture method uses an infrared camera to reliably extract the hand region in the image; an image processor then extracts the principal axis of the region as the hand direction (sketched below). The voice method is built on the speech recognition subsystem Julius, which uses hidden Markov models to recognize Japanese words directing the movement, such as 'Mae' (forward), 'Migi' (right), 'Hidari' (left), and 'Tomare' (stop). Applying an acceptance criterion to the recognition results, we conducted experiments validating that both voice and gesture commands control the wheelchair properly.
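To make the sensing-action trade-off in (1) concrete, here is a minimal Python sketch under strong simplifying assumptions: a small discrete set of locations and candidate sensing actions with hand-written observation models and costs. All names and numbers (`belief`, `look_at_door`, the probabilities) are hypothetical; the report's actual network structure is learned with K2 and a GA, and inference runs over that learned network, neither of which is reproduced here.

```python
# Minimal sketch of sensing-action selection for global localization.
# All locations, actions, probabilities, and costs below are illustrative
# placeholders, not values from the report.

# Prior belief over discrete robot locations (after being "kidnapped").
belief = {"room_A": 0.25, "room_B": 0.25, "corridor": 0.5}

# For each candidate sensing action: a sensing cost and an observation
# model P(observation | location).
actions = {
    "look_at_door": {
        "cost": 0.10,
        "model": {"room_A": {"door": 0.9, "wall": 0.1},
                  "room_B": {"door": 0.2, "wall": 0.8},
                  "corridor": {"door": 0.5, "wall": 0.5}},
    },
    "read_landmark": {
        "cost": 0.30,
        "model": {"room_A": {"mark": 0.95, "none": 0.05},
                  "room_B": {"mark": 0.05, "none": 0.95},
                  "corridor": {"mark": 0.05, "none": 0.95}},
    },
}

def posterior(belief, model, obs):
    """Bayes update of the location belief given one observation."""
    un = {loc: belief[loc] * model[loc][obs] for loc in belief}
    z = sum(un.values())
    return {loc: p / z for loc, p in un.items()}

def expected_confidence(belief, model):
    """Expected max-posterior after sensing: sum_o P(o) * max_l P(l | o)."""
    obs_set = next(iter(model.values())).keys()
    total = 0.0
    for obs in obs_set:
        p_obs = sum(belief[loc] * model[loc][obs] for loc in belief)
        if p_obs > 0:
            total += p_obs * max(posterior(belief, model, obs).values())
    return total

def plan(belief, actions):
    """Pick the action with the best confidence-minus-cost trade-off."""
    return max(actions, key=lambda a:
               expected_confidence(belief, actions[a]["model"]) - actions[a]["cost"])

print(plan(belief, actions))  # the sensing action worth its cost
```

The confidence measure used here (expected max-posterior) is just one reasonable choice; any measure of belief concentration, such as negative entropy, would fit the same selection rule.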
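For the gesture interface in (2), the hand direction can be recovered from second-order image moments of the segmented hand region. The following sketch uses OpenCV for illustration only; the report used a dedicated image processor, and the threshold value here is an assumed placeholder.

```python
# Sketch: estimate the hand pointing direction as the principal axis of
# the bright (warm) region in an infrared image.
import math
import cv2

def hand_direction(ir_image_path, threshold=200):
    """Return the principal-axis angle (radians) of the hand region."""
    img = cv2.imread(ir_image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(ir_image_path)
    # The warm hand appears bright in the IR image; thresholding isolates it.
    _, mask = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None  # no hand region found
    # Principal-axis orientation from the second-order central moments.
    return 0.5 * math.atan2(2 * m["mu11"], m["mu20"] - m["mu02"])
```

In practice the resulting angle would be quantized into a few motion commands (for example forward, left, right), with very small regions rejected as noise.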
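The voice interface then reduces to mapping the recognizer's best hypothesis to a motion command. The word list below comes from the report; the `interpret` function, the command names, and the confidence threshold are illustrative assumptions about how an acceptance criterion might be applied (Julius can report confidence scores for recognized words).

```python
# Sketch: map Julius recognition results to wheelchair motion commands.
# The Japanese words are from the report; the confidence gate and the
# command names are assumptions.
COMMANDS = {
    "Mae": "forward",
    "Migi": "turn_right",
    "Hidari": "turn_left",
    "Tomare": "stop",
}

def interpret(word, confidence, min_confidence=0.7):
    """Accept a recognized word only if it is known and confident enough."""
    if confidence < min_confidence or word not in COMMANDS:
        return "ignore"  # reject uncertain or unknown utterances
    return COMMANDS[word]

print(interpret("Mae", 0.92))  # -> forward
print(interpret("Mae", 0.40))  # -> ignore
```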
Research Products (24 results)