Project/Area Number | 09555080 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | Development Research |
Research Field | Intelligent mechanics/Mechanical systems |
Research Institution | Osaka University |
Principal Investigator | KUNO Yoshinori, Graduate School of Engineering, Osaka University, Associate Professor (10252595) |
Co-Investigators (Kenkyū-buntansha) |
SHIMADA Nobutaka, Graduate School of Engineering, Osaka University, Research Associate (10294034)
MIURA Jun, Graduate School of Engineering, Osaka University, Associate Professor (90219585)
SHIRAI Yoshiaki, Graduate School of Engineering, Osaka University, Professor (50206273)
SHIOHARA Morito, Multimedia Systems Laboratory, Fujitsu Laboratories Ltd., Senior Researcher
|
Project Period (FY) | 1997 – 1999 |
Project Status | Completed (Fiscal Year 1999) |
Budget Amount | ¥12,800,000 (Direct Cost: ¥12,800,000)
Fiscal Year 1999: ¥4,100,000 (Direct Cost: ¥4,100,000)
Fiscal Year 1998: ¥4,000,000 (Direct Cost: ¥4,000,000)
Fiscal Year 1997: ¥4,700,000 (Direct Cost: ¥4,700,000)
|
Keywords | computer vision / human interface / virtual reality / gesture recognition / real-time system / face direction / intention understanding / wheelchair / state transition diagram |
Research Abstract |
Vision-based human interfaces that recognize nonverbal behaviors such as gestures and gaze from video images have attracted great interest. However, conventional interfaces of this kind impose several unfriendly restrictions on their users. In addition, they sometimes fail in recognition, and once they fail, most of them never recover. In this project, we proposed to solve these issues by exploiting real-time, close interaction between the user and the machine, and we developed experimental systems to verify the effectiveness of this approach.

First, we relaxed restrictions on the user's position by using active cameras to track the user. Second, we eased restrictions on the user's unintentional hand motions, which conventional systems may erroneously recognize as gestures intended to give commands. We use another nonverbal behavior, face direction, to select only intentional motions, on the assumption that the user is watching the target object when he/she wants to operate on it. Finally, we proposed to use interaction by speech and action to recognize unknown gestures and to recover from recognition failures. When the system is not certain of its recognition results, it shows the current situation by speech and/or action, which is the machine's form of nonverbal behavior, and then observes the user's reaction. Experimental results show that the user can eventually convey his/her intention to the system by iterating such interaction.

We demonstrated the effectiveness of our approach by developing a system in which objects in a virtual world can be moved by hand gestures. In addition, we developed an intelligent wheelchair to examine whether such nonverbal interfaces are applicable to systems working in the real world. We use face direction to control the wheelchair: the user turns it by looking in the direction he/she wants to go. Experimental results confirm that this is an effective means of interface.
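As a rough illustration of the face-direction gating described above, the following minimal Python sketch shows the idea, not the project's actual code: a hand motion is accepted as a command only while the estimated face direction points at the target object. The Observation fields, the is_intentional_gesture helper, and the 15-degree gaze window are all assumptions introduced for this example.

```python
# Sketch (illustrative only) of gating gesture commands by face direction:
# accept a hand motion as intentional only while the user faces the target.

from dataclasses import dataclass


@dataclass
class Observation:
    face_yaw_deg: float    # estimated horizontal face direction (0 = toward camera)
    target_yaw_deg: float  # direction from the user to the candidate target object
    hand_moving: bool      # whether a gesture-like hand motion was detected


# Assumption: the user is "watching the target" if the face direction lies
# within a small angular window around the direction to the target.
GAZE_WINDOW_DEG = 15.0


def is_intentional_gesture(obs: Observation) -> bool:
    """Treat a hand motion as a command only while the user faces the target."""
    if not obs.hand_moving:
        return False
    return abs(obs.face_yaw_deg - obs.target_yaw_deg) <= GAZE_WINDOW_DEG


if __name__ == "__main__":
    # The user waves while looking away: rejected as unintentional motion.
    print(is_intentional_gesture(Observation(40.0, 0.0, True)))  # False
    # The same motion while looking at the object: accepted as a command.
    print(is_intentional_gesture(Observation(5.0, 0.0, True)))   # True
```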
|
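The wheelchair control can likewise be pictured as a small state transition diagram driven by face direction (cf. the "state transition diagram" keyword). The sketch below is a hypothetical illustration rather than the system's implementation: the state names and the 20°/10° enter/exit thresholds are assumptions. The gap between the two thresholds provides hysteresis, so the chair does not oscillate between turning and going straight when the yaw estimate hovers near a single threshold.

```python
# Sketch (assumptions throughout) of face-direction wheelchair steering:
# look left/right to turn, face forward to go straight.

ENTER_TURN_DEG = 20.0  # yaw magnitude needed to start turning
EXIT_TURN_DEG = 10.0   # yaw magnitude below which we return to STRAIGHT


class FaceSteering:
    def __init__(self) -> None:
        self.state = "STRAIGHT"  # states: STRAIGHT, TURN_LEFT, TURN_RIGHT

    def update(self, face_yaw_deg: float) -> str:
        """Advance the state machine on a new face-yaw estimate (deg, + = left)."""
        if self.state == "STRAIGHT":
            if face_yaw_deg >= ENTER_TURN_DEG:
                self.state = "TURN_LEFT"
            elif face_yaw_deg <= -ENTER_TURN_DEG:
                self.state = "TURN_RIGHT"
        elif abs(face_yaw_deg) < EXIT_TURN_DEG:
            # Hysteresis: stay in the turn until the face is nearly forward again.
            self.state = "STRAIGHT"
        return self.state


if __name__ == "__main__":
    ctrl = FaceSteering()
    for yaw in [0.0, 12.0, 25.0, 15.0, 8.0, -30.0, -5.0]:
        print(f"yaw {yaw:+.1f} -> {ctrl.update(yaw)}")
```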