Project/Area Number |
07650492
|
Research Category |
Grant-in-Aid for Scientific Research (C)
|
Allocation Type | Single-year Grants |
Section | General |
Research Field |
Measurement and Control Engineering
|
Research Institution | Osaka University |
Principal Investigator |
KUNO Yoshinori, Osaka University, Faculty of Engineering, Associate Professor (10252595)
|
Co-Investigator(Kenkyū-buntansha) |
MIURA Jun, Osaka University, Faculty of Engineering, Research Associate (90219585)
SHIRAI Yoshiaki, Osaka University, Faculty of Engineering, Professor (50206273)
|
Project Period (FY) |
1995 – 1996
|
Project Status |
Completed (Fiscal Year 1996)
|
Budget Amount |
¥2,300,000 (Direct Cost: ¥2,300,000)
Fiscal Year 1996: ¥400,000 (Direct Cost: ¥400,000)
Fiscal Year 1995: ¥1,900,000 (Direct Cost: ¥1,900,000)
|
Keywords | computer vision / human interface / gesture recognition / multiple view invariance / stereo / virtual reality / remote operation / human-centered / spatial pointing / motion understanding / real-time processing / spatial cognition |
Research Abstract |
In this research, we have realized a human-centered human-computer interface system that enables a user to move a target object in a 3D CG world, or a real robot, by moving his or her hand. Conventional systems have three issues from the human-centered viewpoint. 1. They need camera calibration. 2. They have a limited camera field of view, so the user must pay attention to keeping the hand within it. 3. They interpret hand motions only with respect to a reference frame fixed in the world. We sometimes, however, use a reference frame attached to our body when considering 3D spatial relations; for example, "right" may mean the direction of our right hand. We have proposed a vision algorithm based on the multiple view affine invariance theory to solve these issues, and have developed a human interface system with a computer-controllable active camera system. Since we use invariance under viewing transformations, the system does not need to know any camera parameters, and thus requires no camera calibration. This feature also allows the system to move the cameras with a simple mechanism so as to keep the user in the field of view. Moreover, the system can interpret hand motions with respect to either the world-fixed coordinate system or the human-centered one, by taking the reference frame for computing the invariants on an object fixed in the world or on the user's body, respectively. The system selects an appropriate interpretation depending on the situation. Operation experiments confirm the usefulness of the system.
|
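The calibration-free idea in the abstract can be illustrated with a minimal sketch (this is not the authors' implementation; the function name and the use of NumPy least squares are assumptions). Under an affine camera model, projection preserves affine combinations of points, so the affine coordinates of a tracked point (e.g. the hand) relative to four reference points satisfy the same linear relation in every view. Two uncalibrated views give four linear equations in the three unknown coordinates, which can be solved without knowing any camera parameters:

```python
import numpy as np

def affine_coords_two_views(basis_v1, basis_v2, p_v1, p_v2):
    """Recover affine coordinates (a, b, c) of a point relative to four
    reference points P0..P3 from two uncalibrated views.

    If P = P0 + a*(P1-P0) + b*(P2-P0) + c*(P3-P0) in 3D, an affine
    camera preserves this relation among the 2D projections. Each view
    contributes two linear equations in (a, b, c); two views give an
    overdetermined 4x3 system solved by least squares.

    basis_v1, basis_v2 : (4, 2) image coordinates of the reference points.
    p_v1, p_v2         : (2,) image coordinates of the query point.
    """
    rows, rhs = [], []
    for basis, p in ((np.asarray(basis_v1, float), np.asarray(p_v1, float)),
                     (np.asarray(basis_v2, float), np.asarray(p_v2, float))):
        # Columns of A are the projected edge vectors P_i - P0 (i = 1..3).
        rows.append((basis[1:] - basis[0]).T)   # 2x3 block per view
        rhs.append(p - basis[0])
    A = np.vstack(rows)                          # 4x3 system
    b = np.concatenate(rhs)                      # 4 right-hand sides
    coords, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coords

# Synthetic check: project a known 3D configuration with two random
# affine cameras and recover the affine coordinates from the images.
rng = np.random.default_rng(0)
P = rng.standard_normal((4, 3))                  # reference points in 3D
true = np.array([0.3, -0.5, 0.8])                # ground-truth coordinates
Q = P[0] + (P[1:] - P[0]).T @ true               # the query point in 3D
M1, t1 = rng.standard_normal((2, 3)), rng.standard_normal(2)
M2, t2 = rng.standard_normal((2, 3)), rng.standard_normal(2)
c = affine_coords_two_views(P @ M1.T + t1, P @ M2.T + t2,
                            Q @ M1.T + t1, Q @ M2.T + t2)
print(c)
```

Because the recovered coordinates are expressed relative to whatever reference points are chosen, taking the basis on a world-fixed object yields a world-frame interpretation, while taking it on the user's body yields a human-centered one, as described in the abstract.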