Active Recognition of Environment by Omni-Directional Vision
Grant-in-Aid for General Scientific Research (B)
| Allocation Type | Single-year Grants |
| Research Institution | Osaka University |
TSUJI Saburo, Osaka Univ., Faculty of Engineering Science, Professor (60029527)
IMAI Masakazu, Nara Graduate School of Science and Technology, Associate Professor (60193653)
YAMADA Seiji, Osaka University, Institute of Industrial Science, Assistant Professor (50220380)
ISHIGURO Hiroshi, Osaka Univ., Faculty of Engineering Science, Research Associate (10232282)
XU Gang, Osaka Univ., Faculty of Engineering Science, Assistant Professor (90226374)
| Project Period (FY) | 1991 – 1993 |
| Project Status | Completed (Fiscal Year 1993) |
| Budget Amount | ¥6,600,000 (Direct Cost: ¥6,600,000) |
| Fiscal Year 1993 | ¥900,000 (Direct Cost: ¥900,000) |
| Fiscal Year 1992 | ¥1,900,000 (Direct Cost: ¥1,900,000) |
| Fiscal Year 1991 | ¥3,800,000 (Direct Cost: ¥3,800,000) |
| Keywords | computer vision / omni-directional vision / stereo / motion vision / autonomous robot / planning / world model / artificial intelligence |
The objective of this research is to develop omni-directional vision and its active use for modeling the environment of an autonomous robot.
(1) Omni-directional stereo
An omni-directional view can be acquired from an image sequence taken while the camera swivels; however, such a view contains no range information. We develop an omni-directional stereo method that uses two omni-directional views, sampled at two vertical slits in the imagery, taken while the camera moves along a circular path.
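The triangulation behind this can be sketched as follows. The geometry is an assumption, since the report does not give details: the camera moves on a circle of radius `r` facing radially outward, and the two panoramas are sampled at vertical slits offset by ±`alpha` radians from the image centre.

```python
import math

def slit_stereo_range(r, alpha, dphi):
    """Range to a point from omni-directional slit stereo.

    Assumed geometry (not spelled out in the report): the camera moves on
    a circle of radius r, facing radially outward, and the two panoramas
    are sampled at vertical slits offset +alpha and -alpha radians from
    the image centre.  dphi is the angular disparity: the difference in
    camera position angle between the moments the point crosses each slit.
    """
    delta = dphi / 2.0
    if not 0.0 <= delta < alpha:
        raise ValueError("invalid disparity for this slit offset")
    # Sine rule in the triangle (circle centre, camera, point):
    # the interior angles are delta, pi - alpha, and alpha - delta.
    return r * math.sin(alpha) / math.sin(alpha - delta)
```

Zero disparity gives a range equal to the circle radius itself, and the range diverges as the disparity approaches `2 * alpha`, which illustrates why this short-baseline stereo is coarse for distant points.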
(2) Environment map from omni-directional stereo
A robot plans the next observation point from a coarse environment map obtained from the omni-directional stereo and iterates a plan-move-observe cycle. By fusing the coarse environment maps acquired at different points, a more precise model is obtained.
(3) Environment map from active observation
Because the baseline of the omni-directional stereo is short, its range estimates are not precise. Two omni-directional views taken at different points provide more precise range information, but this requires estimating both the distance between the two points and the robot's rotation. We develop a new active vision method that guides the robot so as to keep the disparity of two feature points at 360 degrees. The robot's rotation is then zero, and the environment geometry can be estimated precisely.
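With zero rotation, bearings measured in the two omni-views share a common reference direction, so a landmark can be located by plain triangulation. A minimal sketch, with an assumed coordinate frame (the second viewpoint lies along the x axis, and both bearings are measured from that axis):

```python
import math

def triangulate(baseline, theta_a, theta_b):
    """Locate a landmark from two omni-views with zero robot rotation.

    The landmark is seen at bearing theta_a from point A and theta_b
    from point B, where B lies `baseline` ahead of A along the x axis.
    Comparing the bearings directly is possible only because the
    active-vision strategy keeps the rotation at zero.  The frame and
    names here are illustrative, not from the report.
    """
    if math.isclose(theta_a, theta_b):
        raise ValueError("parallel bearings: landmark at infinity")
    # Sine rule: the interior angles at A, B and the landmark are
    # theta_a, pi - theta_b, and theta_b - theta_a respectively.
    r_a = baseline * math.sin(theta_b) / math.sin(theta_b - theta_a)
    return r_a * math.cos(theta_a), r_a * math.sin(theta_a)
```

A longer baseline makes the bearing difference larger and the estimate correspondingly less sensitive to bearing noise, which is the precision gain the active method is after.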
(4) Qualitative map
A qualitative environment map is useful for robot navigation. Our robot autonomously plans the observation of the environment and takes route-panorama views and omni-directional views. By fusing these data, the robot builds a qualitative environment map. We verify the method by experiments in real indoor environments containing many complex objects.
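One common way to represent such a qualitative map is as a topological graph, with observation points or landmarks as nodes and traversable connections as edges; routes then come from graph search rather than metric planning. A minimal sketch (the node names and graph are invented for illustration):

```python
from collections import deque

# A qualitative environment map as a topological graph: nodes are
# landmarks / observation points, edges mean "directly traversable".
# The names here are illustrative, not from the report's experiments.
qualitative_map = {
    "entrance": ["corridor"],
    "corridor": ["entrance", "desk_area", "window"],
    "desk_area": ["corridor"],
    "window": ["corridor"],
}

def find_route(graph, start, goal):
    """Breadth-first search for a shortest landmark-to-landmark route."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no route between the two landmarks
```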
Report (4 results)
Research Products (32 results)