Project/Area Number | 07680419 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Information systems (including information library science) |
Research Institution | HOKKAIDO UNIVERSITY |
Principal Investigator | YAMAMOTO Tsuyoshi, Hokkaido Univ., Computing Center, Professor (80158287) |
Co-Investigator (Kenkyū-buntansha) | AOKI Yoshinao, Hokkaido Univ., Faculty of Engineering, Professor (90001180) |
Project Period (FY) | 1995 – 1996 |
Project Status | Completed (Fiscal Year 1996) |
Budget Amount | ¥2,200,000 (Direct Cost: ¥2,200,000) |
Fiscal Year 1996 | ¥1,000,000 (Direct Cost: ¥1,000,000) |
Fiscal Year 1995 | ¥1,200,000 (Direct Cost: ¥1,200,000) |
Keywords | Stereogram / Optical Flow / Parallax Images / Computer Graphics / Pattern Recognition / Voxel Model / Texture Mapping / 3D Reconstruction / Video Footage / Stereoscopy / Image Understanding |
Research Abstract | A 3D stereogram built from parallax views is a technique for conveying the realism and ambiance of a scene. Many display devices based on this method have been developed commercially, but software content must be created specifically for each device. Stereogram display using parallax views does not depend on complete 3D information about the scene; it needs only two images with parallax. By its nature, this method therefore requires less computation than conventional 3D model reconstruction techniques. The goal of this research is to establish techniques for producing parallax images from continuous video frames taken by a single-lens video camera. We pursued two approaches. The first is based on depth estimation from optical flow computed between consecutive video frames: if the frames are taken while the camera translates parallel to the image plane, the amount of pixel shift (the optical flow) corresponds to the depth of each pixel, and from the estimated depth two images with parallax can be synthesized. We developed the basic algorithm and an experimental system for this method. The second approach reconstructs a volume space from video frames taken while the viewpoint changes, recovering the 3D structure and surface texture of the scene. This method has more potential than the optical-flow-based approach, but it requires the target scene to be static. |
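The first approach described in the abstract can be sketched in a few lines: for a camera translating parallel to the image plane, per-pixel horizontal flow is inversely proportional to depth, and a left/right image pair can then be synthesized by shifting each pixel by half its disparity. This is a minimal 1D illustration of that geometry, not the project's actual implementation; all function names and the use of integer pixel shifts on a single scanline are illustrative assumptions.

```python
# Sketch of the optical-flow-to-parallax idea: depth from horizontal
# flow under parallel camera translation, then a synthetic stereo pair
# built by shifting pixels by +/- disparity/2. Illustrative only.

def depth_from_flow(flow, focal_length, camera_shift):
    """Depth z = f * t / flow, where flow is the horizontal pixel
    shift between consecutive frames and t is the camera translation.
    Zero flow (distant background) maps to infinite depth."""
    return [focal_length * camera_shift / f if f > 0 else float('inf')
            for f in flow]

def synthesize_parallax_pair(scanline, depth, focal_length, baseline):
    """Build left/right views of one scanline: each pixel's disparity
    is d = f * baseline / z, and the pixel is moved +d/2 in the left
    view and -d/2 in the right view (nearer pixels shift more)."""
    width = len(scanline)
    left = [0] * width
    right = [0] * width
    for x, (value, z) in enumerate(zip(scanline, depth)):
        d = 0.0 if z == float('inf') else focal_length * baseline / z
        half = int(d / 2)  # truncate to an integer pixel shift
        if 0 <= x + half < width:
            left[x + half] = value
        if 0 <= x - half < width:
            right[x - half] = value
    return left, right

# Toy scanline: the first two pixels flowed twice as fast, so they are
# half as deep and receive twice the disparity of the last two.
flow = [2.0, 2.0, 1.0, 1.0]
depth = depth_from_flow(flow, focal_length=100.0, camera_shift=1.0)
left, right = synthesize_parallax_pair([10, 20, 30, 40], depth,
                                       focal_length=100.0, baseline=1.0)
```

A full system would estimate `flow` from real frames (e.g. by block matching or a gradient-based optical flow method) and handle the occlusion holes that pixel shifting leaves behind; the sketch omits both.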