1992 Fiscal Year Final Research Report Summary
Outdoor World Modeling by Intelligent Integration of Multi-Visual Information
Project/Area Number | 03805029 |
Research Category | Grant-in-Aid for General Scientific Research (C) |
Allocation Type | Single-year Grants |
Research Field | Information Engineering |
Research Institution | Osaka University |
Principal Investigator | ASADA Minoru, Osaka University, Faculty of Engineering, Dept. of Mechanical Engineering for Computer-Controlled Machinery, Associate Professor (60151031) |
Co-Investigator (Kenkyū-buntansha) | MIURA Jun, Osaka University, Faculty of Engineering, Dept. of Mechanical Engineering for Computer-Controlled Machinery, Research Associate (90219585); SHIRAI Yoshiaki, Osaka University, Faculty of Engineering, Dept. of Mechanical Engineering for Computer-Controlled Machinery, Professor (50206273) |
Project Period (FY) | 1991 – 1992 |
Keywords | Intensity image / Range image / Color image / Stereo images / Sensor fusion / Outdoor scene / Geometric modeling / Reliability |
Research Abstract |
In this project, we proposed a method of scene interpretation that dynamically integrates multi-visual sensory data, such as intensity and range images, into scene descriptions. The integration is guided by the intermediate results of the sensory data processing, which are obtained using knowledge of the objects in the scene and the properties of the individual sensory data. In the first year, we found that geometric modeling of one object class using multi-visual information was useful both for extracting the objects of that class and for reasoning about spatial relationships between objects in the scene. This year, we extended the method so that it is applicable to various kinds of object classes: we developed a model-driven spatial reasoning system that extracts various kinds of objects from the background and determines the geometric structures of these objects, as well as of the unexplored regions. The research results are as follows.
(1) In the case of stereo color images, the range data obtained by feature-based stereo matching are sparse and therefore insufficient for segmenting the scene. We therefore used the result of color-based region segmentation to recover the scene structure in terms of planar patches extracted in disparity space. The scene structure was roughly recovered, but more accurate range information is needed to determine its details.
(2) In the case of a pair of dense range data and an intensity image, we determined the parameters of the parametric model from the uncertainties of planar patches estimated from the range data. We showed that the uncertainty depends on two kinds of error sources, probable and systematic errors, clarified their relationship, and applied the result to the problem of object recognition.
|
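The plane-fitting step mentioned in result (1) can be illustrated with a small sketch. The code below is not from the project; it is a minimal, hypothetical Python/NumPy illustration of fitting a planar patch in disparity space to the sparse, feature-based stereo matches that fall inside one color-segmented region, using the parameter covariance as a simple reliability measure. The function name, noise model, and synthetic data are assumptions, not taken from the report.

```python
# Hypothetical sketch for result (1): fit a planar patch d = a*u + b*v + c in
# disparity space to the sparse stereo matches of one color-segmented region,
# and report a covariance-based reliability estimate for the fitted plane.
import numpy as np

def fit_disparity_plane(points):
    """points: (N, 3) array of (u, v, d) sparse stereo matches in one region.
    Returns (a, b, c) of d = a*u + b*v + c and the parameter covariance."""
    u, v, d = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([u, v, np.ones_like(u)])          # design matrix
    params, residuals, *_ = np.linalg.lstsq(A, d, rcond=None)
    # Residual variance -> covariance of the plane parameters; a large
    # covariance flags an unreliable (e.g. under-constrained) patch.
    dof = max(len(d) - 3, 1)
    sigma2 = float(residuals[0]) / dof if residuals.size else 0.0
    cov = sigma2 * np.linalg.inv(A.T @ A)
    return params, cov

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic sparse matches on the plane d = 0.02*u - 0.01*v + 5, plus noise.
    uv = rng.uniform(0, 640, size=(40, 2))
    d = 0.02 * uv[:, 0] - 0.01 * uv[:, 1] + 5 + rng.normal(0, 0.2, 40)
    params, cov = fit_disparity_plane(np.column_stack([uv, d]))
    print("plane (a, b, c):", params)
    print("parameter std devs:", np.sqrt(np.diag(cov)))
```

In this sketch, one such plane would be fitted per color region; regions whose parameter covariance is large would be treated as unreliable, which matches the abstract's remark that more accurate range information is needed for the details of the scene structure.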
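Result (2) mentions determining the parameters of a parametric model from the uncertainties of planar patches estimated from dense range data. The report gives no formulas or code; the sketch below is only an assumed illustration of one common way to use such uncertainties, namely inverse-covariance weighting of several patch estimates, with an extra variance term standing in for a systematic error component. The function, its arguments, and the treatment of the two error sources are assumptions, not the project's actual method.

```python
# Hypothetical sketch for result (2): fuse several planar-patch parameter
# estimates by weighting each with the inverse of its covariance, so that less
# certain patches contribute less. The split into a random ("probable") part
# and a systematic part is only an illustrative modelling assumption.
import numpy as np

def fuse_patch_parameters(estimates, covariances, systematic_var=0.0):
    """estimates: list of (3,) parameter vectors describing the same plane.
    covariances: list of (3, 3) covariances of the random (probable) error.
    systematic_var: extra variance added to every patch for systematic error.
    Returns the inverse-covariance-weighted estimate and its covariance."""
    info = np.zeros((3, 3))
    info_vec = np.zeros(3)
    for x, c in zip(estimates, covariances):
        c_total = c + systematic_var * np.eye(3)   # inflate for systematic error
        w = np.linalg.inv(c_total)                 # information (weight) matrix
        info += w
        info_vec += w @ x
    fused_cov = np.linalg.inv(info)
    return fused_cov @ info_vec, fused_cov

if __name__ == "__main__":
    est = [np.array([0.02, -0.01, 5.1]), np.array([0.021, -0.012, 4.9])]
    cov = [np.diag([1e-4, 1e-4, 0.04]), np.diag([4e-4, 4e-4, 0.09])]
    x, c = fuse_patch_parameters(est, cov, systematic_var=1e-4)
    print("fused parameters:", x)
```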
Research Products (8 results)