Human Interfaces through Mutual Assistance between Multiple Sensing Modalities and Actions based on Visual Understanding of the World
Project/Area Number | 14350127 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Intelligent mechanics/Mechanical systems |
Research Institution | Saitama University |
Principal Investigator | KUNO Yoshinori, Saitama University, Faculty of Engineering, Professor (10252595) |
Co-Investigator (Kenkyū-buntansha) | NAKAMURA Akio, Tokyo Denki University, Faculty of Engineering, Associate Professor (00334152) |
Project Period (FY) | 2002 – 2005 |
Project Status | Completed (Fiscal Year 2005) |
Budget Amount | ¥14,700,000 (Direct Cost: ¥14,700,000)
Fiscal Year 2005: ¥2,400,000 (Direct Cost: ¥2,400,000)
Fiscal Year 2004: ¥3,900,000 (Direct Cost: ¥3,900,000)
Fiscal Year 2003: ¥3,200,000 (Direct Cost: ¥3,200,000)
Fiscal Year 2002: ¥5,200,000 (Direct Cost: ¥5,200,000) |
Keywords | computer vision / human interface / multimodal interface / speech understanding / gaze / service robot / eye contact / nonverbal behavior / ellipsis / intention understanding |
Research Abstract |
We have investigated human interfaces for welfare service robots. Human users may ask such robots to fetch something they need, so speech interfaces seem best suited for them. However, the robots also need vision, both to understand the users' utterances and to carry out their orders. When we are certain that a listener shares some visual information with us, we often omit it from our utterances or refer to it only vaguely. We have proposed a method that uses computer vision to understand speech containing such ambiguities. The method tracks the user's gaze direction and detects objects in that direction; it also recognizes the user's actions. Based on these pieces of visual information, it interprets the user's inexplicit utterances. Experimental results show that the method helps to realize human-friendly speech interfaces.

Visual recognition of nonverbal behaviors is also required for user-friendly interfaces. In this project, we have worked on eye contact, an effective means of controlling communication, for example when initiating it. It might seem that eye contact is established simply when two people look at each other, but this alone is not sufficient: each party must also be aware of being looked at by the other. We have proposed a method of active eye contact for human-robot communication that takes both conditions into account. The robot changes its facial expression according to its observations of the human in order to establish eye contact.

Although vision is important, as mentioned above, it may not always work properly. We have proposed interactive vision to address this problem: when the vision system cannot achieve a task, the robot speaks to the user so that the user's natural response provides information that helps the robot's vision system.
|
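The gaze-based resolution of inexplicit utterances described in the abstract can be illustrated with a minimal sketch. This is a hypothetical example, not the project's actual implementation: the names (DetectedObject, GazeEstimate, resolve_referent) and the 15-degree threshold are assumptions. It scores detected objects by how close they lie to the user's estimated gaze ray and uses the best match to fill in an object that the utterance omits or mentions only vaguely.

```python
# Minimal illustrative sketch (hypothetical names, not the project's code):
# resolve an underspecified request such as "bring me that" by combining
# the user's estimated gaze direction with objects detected by vision.

import math
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class DetectedObject:
    label: str                       # e.g. "cup", "remote control"
    position: Tuple[float, float, float]   # (x, y, z) in the room frame, metres


@dataclass
class GazeEstimate:
    origin: Tuple[float, float, float]     # eye position (x, y, z)
    direction: Tuple[float, float, float]  # unit vector of the gaze ray


def angle_to_gaze(gaze: GazeEstimate, obj: DetectedObject) -> float:
    """Angle (radians) between the gaze ray and the ray from the eyes to the object."""
    to_obj = tuple(o - e for o, e in zip(obj.position, gaze.origin))
    norm = math.sqrt(sum(c * c for c in to_obj)) or 1e-9
    cos = sum(d * c for d, c in zip(gaze.direction, to_obj)) / norm
    return math.acos(max(-1.0, min(1.0, cos)))


def resolve_referent(utterance: str,
                     gaze: GazeEstimate,
                     objects: List[DetectedObject],
                     max_angle_deg: float = 15.0) -> Optional[DetectedObject]:
    """If the utterance omits the object ("that", "it"), pick the detected
    object closest to the gaze direction; otherwise match by name."""
    named = [o for o in objects if o.label in utterance.lower()]
    if named:
        return named[0]                      # object mentioned explicitly
    candidates = [(angle_to_gaze(gaze, o), o) for o in objects]
    angle, best = min(candidates, key=lambda t: t[0], default=(None, None))
    if best is not None and math.degrees(angle) <= max_angle_deg:
        return best                          # fill the ellipsis from gaze
    return None                              # ambiguous: fall back to asking the user


if __name__ == "__main__":
    gaze = GazeEstimate(origin=(0.0, 0.0, 1.5), direction=(0.0, 1.0, 0.0))
    scene = [DetectedObject("cup", (0.1, 2.0, 1.4)),
             DetectedObject("book", (1.5, 1.0, 0.8))]
    target = resolve_referent("please bring me that", gaze, scene)
    print(target.label if target else "ask the user for clarification")
```

In a real system the objects would come from an object detector and the gaze from head/eye tracking, combined with recognized actions as the abstract describes; the None branch corresponds to the interactive-vision idea of having the robot speak to the user when vision alone cannot decide.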
Report (5 results)
Research Products (81 results)