1996 Fiscal Year Final Research Report Summary
A Study on automatic construction of virtual space from natural scenes and non-contact access
Project/Area Number | 07555119
Research Category | Grant-in-Aid for Scientific Research (A)
Allocation Type | Single-year Grants
Section | Developmental (試験)
Research Field | Information and Communication Engineering
Research Institution | Nagoya University
Principal Investigator | TANIMOTO Masayuki, Nagoya University, Faculty of Engineering, Professor (30109293)
Co-Investigator (Kenkyū-buntansha) | MATSUDA Kiichi, Fujitsu Laboratories, Ltd., Media Processing Lab., Manager, Researcher
Project Period (FY) | 1995 – 1996
Keywords | three-dimensional scene / virtual space / multi-viewpoint image set / depth information / non-contact man-machine interface / virtual space access / gesture interface / analysis of three-dimensional motion
Research Abstract |
We proposed and constructed a "3D editor" with which the user can access a virtual space composed from a natural scene inside the computer. We developed the following three algorithms as the key technologies of this system.

(1) Construction of virtual space from natural scenes. We developed a system that acquires the depth information of a three-dimensional natural scene from a multi-viewpoint image set captured by several cameras placed (or one camera moving) along a straight line. In the proposed system, the depth information is obtained accurately by dividing the three-dimensional structure into multiple layers with different disparities, and by correcting erroneous disparities caused by occlusion while processing the layers from near to far (a code sketch of this layered approach is given below). In addition, we developed an algorithm that composes the virtual three-dimensional space by interpolating the obtained depth information.

(2) Construction of a gesture-based man-machine interface. We constructed a compact gesture-based man-machine interface in which hand gestures are used to access the virtual three-dimensional space. Two cameras are mounted on top of the monitor, and the user inputs information by moving a hand in front of it. The stereo moving scene of the hand is analyzed on a workstation, and the motion of the hand in three-dimensional space is detected. In the proposed system, the motion and form of the hand are detected independently of the user's clothing and of the background (see the stereo-tracking sketch below).

(3) Command input by gesture. We developed an algorithm that recognizes commands from the detected three-dimensional motion of the hand. In the proposed system, a gesture is recognized by decomposing a continuous operation into individual operation units and matching each unit against a model (see the matching sketch below). Various three-dimensional commands can be interpreted with this algorithm.
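To make (1) concrete, the following is a minimal NumPy sketch of a layered (plane-sweep style) disparity search with a greedy near-to-far layer assignment; it is an illustration under stated assumptions, not the project's implementation. It assumes rectified grayscale views taken at equal spacing along the baseline, and every name (`layered_disparity`, `tau`, and so on) is hypothetical.

```python
import numpy as np

def layered_disparity(views, disparities, window=5, tau=10.0):
    """Disparity map from views taken at equal spacing on a straight line.

    views       : rectified HxW grayscale arrays; views[0] is the reference
    disparities : candidate layers, ordered near (large) to far (small)
    window      : side of the box window used to smooth the matching error
    tau         : tolerance for accepting a layer in the near-to-far pass
    """
    ref = views[0].astype(np.float64)
    h, w = ref.shape
    cost = np.empty((len(disparities), h, w))
    kernel = np.ones(window) / window

    for i, d in enumerate(disparities):
        # Sum of squared differences between the reference and every other
        # view shifted by k*d (view k sits k baselines from the reference).
        err = np.zeros((h, w))
        for k, img in enumerate(views[1:], start=1):
            err += (ref - np.roll(img.astype(np.float64), -k * d, axis=1)) ** 2
        # Separable box filter: average the error over a small window.
        err = np.apply_along_axis(np.convolve, 1, err, kernel, 'same')
        err = np.apply_along_axis(np.convolve, 0, err, kernel, 'same')
        cost[i] = err

    # Near-to-far assignment: a pixel is claimed by the nearest layer whose
    # cost is close to the pixel's best cost, so foreground surfaces win
    # first and matches corrupted by occlusion are not handed to the
    # background. This mimics the "correct from near to far" idea.
    best = cost.min(axis=0)
    labels = np.zeros((h, w), int)
    free = np.ones((h, w), bool)
    for i in range(len(disparities)):            # near layers first
        take = free & (cost[i] <= best + tau)
        labels[take] = i
        free &= ~take
    return np.asarray(disparities)[labels]
```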
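For (2), the stand-in sketch below shows one way such a stereo gesture sensor can work; it is an assumption, not the reported system. It segments the moving hand by frame differencing, which keys on motion rather than on the appearance of clothing or a static background, then triangulates the motion centroid with the parallel-stereo relation Z = fB/d. Image coordinates are assumed to be measured from each camera's principal point, and all names and thresholds are hypothetical.

```python
import numpy as np

def hand_position_3d(prev_pair, curr_pair, f, baseline, thresh=15):
    """3-D hand position from two cameras mounted above the monitor.

    prev_pair, curr_pair : (left, right) grayscale frames at times t-1 and t
    f        : focal length in pixels (both cameras assumed identical)
    baseline : camera separation, in the units desired for X, Y, Z
    Pixel coordinates are taken relative to each camera's principal point.
    """
    centroids = []
    for prev, curr in zip(prev_pair, curr_pair):
        # Frame differencing: the hand is found by its motion, so the result
        # does not depend on clothing color or on the static background.
        moving = np.abs(curr.astype(int) - prev.astype(int)) > thresh
        ys, xs = np.nonzero(moving)
        if xs.size == 0:
            return None                      # no motion, no hand
        centroids.append((xs.mean(), ys.mean()))

    (xl, yl), (xr, yr) = centroids
    d = xl - xr                              # stereo disparity of the centroid
    if d <= 0:
        return None
    Z = f * baseline / d                     # parallel-stereo depth
    X = Z * xl / f
    Y = Z * (yl + yr) / (2 * f)
    return X, Y, Z
```

Frame differencing is only the simplest appearance-independent segmentation; the point of the sketch is the pipeline (segment in each view, then triangulate), not the particular detector.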
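For (3), one plausible reading of "decomposing a continuous operation into individual operation units and matching each unit against a model" is to segment the hand trajectory at pauses and match each segment's dominant direction against stored models. The sketch below implements that reading only; the command set, the pause threshold, and all names are invented for illustration.

```python
import numpy as np

# Hypothetical command models: one unit direction vector per atomic gesture.
MODELS = {
    "move_right": np.array([1.0, 0.0, 0.0]),
    "move_left":  np.array([-1.0, 0.0, 0.0]),
    "move_up":    np.array([0.0, -1.0, 0.0]),
    "push":       np.array([0.0, 0.0, -1.0]),
}

def recognize(trajectory, pause_speed=0.01):
    """Split a continuous 3-D trajectory into operation units at pauses,
    then match each unit's net direction against the gesture models."""
    pts = np.asarray(trajectory, float)
    speed = np.linalg.norm(np.diff(pts, axis=0), axis=1)

    # A near-zero speed marks the boundary between two operation units.
    cuts = [0] + [i + 1 for i in range(len(speed)) if speed[i] < pause_speed]
    cuts.append(len(pts))

    commands = []
    for a, b in zip(cuts[:-1], cuts[1:]):
        if b - a < 2:
            continue                         # too short to be a unit
        v = pts[b - 1] - pts[a]              # net displacement of the unit
        n = np.linalg.norm(v)
        if n == 0:
            continue
        v /= n
        # Nearest-model matching by direction cosine.
        commands.append(max(MODELS, key=lambda k: float(v @ MODELS[k])))
    return commands
```

A real recognizer would use richer unit models (for example curvature or hand form as well as direction); direction matching is just the simplest instance of the unit-and-model idea described in the abstract.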