On-line Reconstruction of 3-D Structure from Image Sequences Based on Spatio-temporal Information Propagation
Project/Area Number | 15500117 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Perception information processing/Intelligent robotics |
Research Institution | Tokyo Metropolitan University |
Principal Investigator | TAGAWA Norio, Tokyo Metropolitan University, Graduate School of Engineering, Associate Professor (00244418) |
Co-Investigator (Kenkyū-buntansha) | MINAGAWA Akihiro, Tokyo Metropolitan University, Graduate School of Engineering, Research Associate (00305418) |
Project Period (FY) | 2003 – 2004 |
Project Status | Completed (Fiscal Year 2004) |
Budget Amount | ¥3,000,000 (Direct Cost: ¥3,000,000) |
Fiscal Year 2004: ¥800,000 (Direct Cost: ¥800,000)
Fiscal Year 2003: ¥2,200,000 (Direct Cost: ¥2,200,000)
Keywords | optical flow / Kalman filter / 3-D shape reconstruction / EM algorithm / MAP estimator / ML estimator / information propagation / image sequence analysis / moving images / shape reconstruction / multi-resolution processing / MAP-EM algorithm / shading |
Research Abstract |
1. Results of this project
(1) Information propagation in the resolution direction: We developed an optical flow computation algorithm that uses two successive frames and can solve both the aperture problem and the aliasing problem. The basic idea is to apply a Kalman filter along the resolution direction, and we extended it so that the parameters that normally must be known before applying the Kalman filter are determined automatically. Numerical evaluations confirmed that the algorithm works correctly.
(2) Information propagation in the time direction: We developed a depth computation algorithm based on information propagation in the time direction, using the optical flow estimate and its reliability computed by the above algorithm. This depth computation algorithm is also a concrete instance of the above Kalman filter with the parameter determination function, and we incorporated a newly developed depth interpolation method into this algorithm.
2. Future plan
In this project we examined the case in which the intensity invariance constraint holds completely. However, when the camera is fixed and the target objects move, this constraint does not hold. We therefore plan to extend the algorithm developed in this project to handle intensity-variant situations.
|
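The abstract describes the method only at a high level, so the following is a minimal Python/NumPy sketch of the two propagation directions, not the authors' algorithm. It assumes per-pixel scalar Kalman updates, a brightness-constancy measurement for the horizontal flow component only, a known horizontal camera translation for the depth measurement, power-of-two image sizes, and fixed noise parameters (r_meas, q_level, q_time, focal, tx); the project's actual contribution of determining such parameters automatically (the MAP-EM part) is not reproduced here, and all function names are illustrative.

import numpy as np


def gaussian_pyramid(image, levels):
    # Coarse-to-fine pyramid by 2x2 block averaging (a stand-in for proper
    # anti-alias pre-filtering); the sketch assumes power-of-two image sizes.
    pyr = [image.astype(float)]
    for _ in range(levels - 1):
        im = pyr[-1]
        h, w = (im.shape[0] // 2) * 2, (im.shape[1] // 2) * 2
        im = im[:h, :w]
        pyr.append(0.25 * (im[0::2, 0::2] + im[1::2, 0::2]
                           + im[0::2, 1::2] + im[1::2, 1::2]))
    return pyr[::-1]  # coarsest level first


def flow_resolution_kalman(frame0, frame1, levels=4, r_meas=1.0, q_level=0.5):
    # (1) Information propagation along the resolution direction: the coarse-
    # level flow and its variance serve as the Kalman prediction at the next
    # finer level; the brightness-constancy constraint Ix*u + It ~ 0 is the
    # scalar measurement (horizontal component only, for brevity).
    pyr0 = gaussian_pyramid(frame0, levels)
    pyr1 = gaussian_pyramid(frame1, levels)
    u = np.zeros_like(pyr0[0])       # horizontal flow at the coarsest level
    p = np.full_like(u, 10.0)        # its error variance (vague prior)
    for lv, (im0, im1) in enumerate(zip(pyr0, pyr1)):
        if lv > 0:                   # predict: upsample, scale flow by 2, inflate variance
            u = 2.0 * np.kron(u, np.ones((2, 2)))
            p = np.kron(p, np.ones((2, 2))) + q_level
        ix = np.gradient(im0, axis=1)         # spatial gradient = measurement matrix H
        it = im1 - im0                        # temporal difference
        z = -it                               # measurement: Ix * u ~ -It
        k = p * ix / (ix * ix * p + r_meas)   # per-pixel Kalman gain
        u = u + k * (z - ix * u)
        p = (1.0 - k * ix) * p
    return u, p                      # finest-level flow and its reliability


def depth_time_kalman(w, pw, u, pu, focal=500.0, tx=0.05, q_time=1e-4):
    # (2) Information propagation along the time direction: for a camera
    # translating by tx along x, u ~ focal*tx/Z, so each frame yields a noisy
    # measurement of the inverse depth w = 1/Z whose variance comes from the
    # flow reliability pu estimated in step (1).
    p_pred = pw + q_time                      # static-scene prediction step
    z = u / (focal * tx)                      # inverse-depth measurement
    r = pu / (focal * tx) ** 2                # measurement variance from flow reliability
    k = p_pred / (p_pred + r)
    w_new = w + k * (z - w)
    p_new = (1.0 - k) * p_pred
    return w_new, p_new


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f0 = rng.random((64, 64))
    f1 = np.roll(f0, 1, axis=1)               # second frame: 1-pixel horizontal shift
    u, pu = flow_resolution_kalman(f0, f1)
    w, pw = depth_time_kalman(np.zeros_like(u), np.ones_like(u), u, pu)
    print("mean flow:", u.mean(), "mean inverse depth:", w.mean())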
Report (3 results)
Research Products (6 results)