2002 Fiscal Year Final Research Report Summary
Research of 3D Image Acquisition and Its Consistent Representation for Shared-Type Real Image Space
Project/Area Number | 13650436
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Single-year Grants
Section | General
Research Field | Information and Communication Engineering
Research Institution | Kanagawa University
Principal Investigator | SAITO Takahiro, Kanagawa University, Faculty of Engineering, Professor (10150749)
Co-Investigator (Kenkyū-buntansha) | KOMATSU Takashi, Kanagawa University, Faculty of Engineering, Research Assistant (80241115)
Project Period (FY) | 2001 – 2002
Keywords | Shared-Space Communication / Laser-Radar Image / 3D Consistent Representation / Large-Scale Texture
Research Abstract |
Toward future 3D image communication, we have been studying "Multimedia Ambiance Communication", a kind of shared-space communication, and have adopted an approach that designs the 3D image space from actual images of outdoor scenery, introducing a three-layer model of long-, mid- and short-range views. The long- and mid-range views do not require precise representation of their 3D structure, and hence we employ a setting representation, like stage settings, that approximates their 3D structure with a slanting-plane model. We present an approach that produces a consistent setting representation of the long- and mid-range views from range and texture data measured with a laser scanner and a digital camera placed at multiple viewpoints. Producing such a representation requires several techniques: nonlinear smoothing of raw range data, plane segmentation of range data, registration of multi-viewpoint range data, integration of multi-viewpoint setting representations, and texture mapping onto each setting plane. In this report, we concentrate on the plane segmentation and the multi-viewpoint data registration. Our plane segmentation method is based on the concept of region competition and can precisely extract fitting planes from the range data. Our registration method uses the equations of segmented planes corresponding between two different viewpoints to determine the 3D Euclidean transformation between them. A unified, consistent setting representation is then constructed by integrating the setting representations from the multiple viewpoints.
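To illustrate the registration step, here is a minimal sketch of how corresponding plane equations can determine the 3D Euclidean transformation between two viewpoints. The function name and the particular closed form (Kabsch/SVD for the rotation, linear least squares for the translation) are assumptions for illustration; the report does not state which solver is used.

```python
import numpy as np

def registration_from_planes(normals_a, offsets_a, normals_b, offsets_b):
    """Estimate the rigid transform x_b = R @ x_a + t from corresponding
    plane equations n . x = d observed at two viewpoints (hypothetical
    helper, not from the report).

    Under that transform a plane (n_a, d_a) maps to
        n_b = R @ n_a   and   d_b = d_a + n_b . t,
    so R follows from the unit normals alone and t from a linear
    least-squares fit to the offset differences.
    """
    # Rotation: minimize sum ||R n_a_i - n_b_i||^2 via SVD (Kabsch method).
    H = normals_a.T @ normals_b
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
    # Translation: solve n_b_i . t = d_b_i - d_a_i in the least-squares sense.
    t, *_ = np.linalg.lstsq(normals_b, offsets_b - offsets_a, rcond=None)
    return R, t
```

With at least three corresponding planes whose normals are linearly independent, R and t are uniquely determined.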
Research Products (18 results)