MAEDA Sakashi Fukuoka Univ., Faculty of Engineering, Research Associate (90330901)
MORIMOTO Tsuyoshi Fukuoka Univ., Faculty of Engineering, Professor (10309891)
Budget Amount
¥1,700,000 (Direct Cost: ¥1,700,000)
Fiscal Year 2003: ¥800,000 (Direct Cost: ¥800,000)
Fiscal Year 2002: ¥900,000 (Direct Cost: ¥900,000)
This research developed a framework for understanding a user's intention from multi-modal information, enabling a natural, robust, and intelligent man-machine interface.
In general, intention understanding consists of the following three stages: (A) tracking humans walking around the system; (B) detecting a human approaching the system and confirming his/her intention to use it; (C) understanding the user's intention during man-machine dialogue. Previous studies focused on only one of these stages and not on the transitions between them; consequently, a natural dialogue interface had not yet been developed.
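The three stages and the transitions between them can be viewed as a simple state machine. The following is a minimal sketch of such a controller; the class, the distance threshold, and the flag names are hypothetical illustrations, not the report's actual implementation.

```python
from enum import Enum, auto

class Stage(Enum):
    TRACKING = auto()    # (A) track humans walking around the system
    CONFIRMING = auto()  # (B) a person approaches; confirm intention to use
    DIALOGUE = auto()    # (C) understand the user's intention in dialogue

class InteractionStateMachine:
    """Hypothetical controller for the three-stage pipeline."""

    def __init__(self):
        self.stage = Stage.TRACKING

    def update(self, distance_m, intention_confirmed, dialogue_done):
        # (A) -> (B): a tracked person comes close to the system
        if self.stage is Stage.TRACKING and distance_m < 1.5:
            self.stage = Stage.CONFIRMING
        # (B) -> (C): the intention to use the system is confirmed
        elif self.stage is Stage.CONFIRMING and intention_confirmed:
            self.stage = Stage.DIALOGUE
        # (C) -> (A): dialogue finished, return to tracking
        elif self.stage is Stage.DIALOGUE and dialogue_done:
            self.stage = Stage.TRACKING
        return self.stage
```

Modelling the transitions explicitly is what distinguishes this research from previous studies, which treated each stage in isolation.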
This research focused on (B), (C), and the transitions between them, adopting existing results for (A), and obtained the following three results. (1) When the system detects an approaching person, it gathers information using a new active-vision method and confirms his/her intention implicitly; this implicit confirmation enables a natural transition from (B) to (C). (2) For stage (C), a combination of vision-based lip-reading and context analysis with conventional spoken-language recognition was proposed, which yields high recognition accuracy. Using the proposed methods for (B) and (C), a very robust dialogue system could be developed. (3) The recognition accuracy for dialogues, although very high, was nevertheless not perfect. A touch-panel device with on-screen menus was therefore additionally introduced, and a new modality-switching method was proposed: the user communicates with the system through audio-visual dialogue as much as possible, while successful task completion remains guaranteed by falling back to the touch panel.
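Results (2) and (3) can be sketched as a late score fusion of the three recognition sources, followed by a confidence-based modality switch. This is an illustrative assumption about how such a combination might work; the function names, the fixed weights, and the threshold are hypothetical and not taken from the report.

```python
def fuse_scores(speech_scores, lip_scores, context_prior,
                w_speech=0.6, w_lip=0.3, w_ctx=0.1):
    """Late fusion of per-hypothesis scores.

    Each argument maps a hypothesis string to a score in [0, 1];
    the weights are hypothetical and would need tuning in practice.
    """
    hyps = set(speech_scores) | set(lip_scores) | set(context_prior)
    fused = {h: w_speech * speech_scores.get(h, 0.0)
                + w_lip * lip_scores.get(h, 0.0)
                + w_ctx * context_prior.get(h, 0.0)
             for h in hyps}
    best = max(fused, key=fused.get)
    return best, fused[best]

def choose_modality(confidence, threshold=0.7):
    # Fall back to the touch-panel menu only when the fused
    # audio-visual confidence cannot guarantee success.
    return "audio_visual" if confidence >= threshold else "touch_panel"
```

Under this sketch, the audio-visual channel is preferred whenever its fused confidence is high enough, which matches the report's goal of using dialogue as frequently as possible while still guaranteeing success.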