Visual navigation technology for intelligent vehicle - Road model for autonomous driving and alarming
Project/Area Number |
12650425
|
Research Category |
Grant-in-Aid for Scientific Research (C)
|
Allocation Type | Single-year Grants |
Section | General |
Research Field |
Measurement engineering
|
Research Institution | Osaka University |
Principal Investigator |
YAGI Yasushi Osaka University, Graduate School of Engineering Science, Department of Systems and Human Science, Associate Professor (60231643)
|
Project Period (FY) |
2000 – 2001
|
Project Status |
Completed (Fiscal Year 2001)
|
Budget Amount |
¥3,800,000 (Direct Cost: ¥3,800,000)
Fiscal Year 2001: ¥1,900,000 (Direct Cost: ¥1,900,000)
Fiscal Year 2000: ¥1,900,000 (Direct Cost: ¥1,900,000)
|
Keywords | Active contour model / Visual navigation / 3D reconstruction / Reactive navigation / Road tracking / Route guidance / 3D shape reconstruction / Monocular image sequence |
Research Abstract |
Several researchers have investigated visually guided navigation of autonomous road vehicles using various visual information sources, including color, disparity, range, and optical flow. Such research has emphasized that determining the road ahead is a key component in the development of intelligent road vehicles. In this project, we propose three methods for solving the general problem of road following and 3D road-shape reconstruction for a smart vehicle, all based on active contour models (ACM). 1) The first method assumes that road boundaries are parallel and that the width of the road is constant. We detect and track the road region in the image using coupled active contour models subject to a parallelism constraint, and the system then generates a 3D road model from a single image. We evaluated the effectiveness of the method by applying it to real road scenes comprising more than 4,000 images. 2) The second method is for reactive visual navigation based on omnidirectional sensing. The robot is projected at the center of the input image by the omnidirectional image sensor HyperOmni Vision, so the rough free space around the robot can be extracted with an active contour model. The method produces low-level commands that keep the robot in the middle of the free space and avoid collisions by balancing the shape of the extracted contour. The robot can avoid obstacles and move along a corridor by tracking the closed-loop curve with an active contour model. Furthermore, the method directly represents the spatial relations between the environment and the robot in image coordinates, so it can control the robot without geometrical 3D reconstruction. 3) The third method navigates the robot along a route. The route is memorized as a series of consecutive omnidirectional images at the horizon while the robot moves to the goal position. While the robot is navigating to the goal point, the input is matched against the memorized spatio-temporal route-pattern images using dual active contour models, which allows the exact robot position to be estimated.
|
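The abstract's three methods all rest on active contour (snake) energy minimization: a contour iteratively deforms to balance internal smoothness against attraction to image evidence such as road-boundary edges. The following is a minimal greedy-snake sketch of that idea only; the function name, parameters, and synthetic edge map are illustrative assumptions, and the project's actual coupled, parallelism-constrained formulation is not reproduced here.

```python
import numpy as np

def greedy_snake_step(contour, edge_map, alpha=0.5, beta=0.5):
    """One greedy ACM update: each contour point moves to the 8-neighbor
    minimizing internal energy (continuity + curvature) plus external
    energy (negative edge strength, so strong edges attract the curve)."""
    h, w = edge_map.shape
    new = contour.copy()
    n = len(contour)
    for i in range(n):
        prev_pt = new[(i - 1) % n]       # already-updated predecessor
        next_pt = contour[(i + 1) % n]   # not-yet-updated successor
        best, best_e = contour[i], np.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                y, x = contour[i][0] + dy, contour[i][1] + dx
                if not (0 <= y < h and 0 <= x < w):
                    continue
                p = np.array([y, x])
                cont = np.sum((p - prev_pt) ** 2)                # continuity
                curv = np.sum((prev_pt - 2 * p + next_pt) ** 2)  # curvature
                e = alpha * cont + beta * curv - edge_map[y, x]
                if e < best_e:
                    best_e, best = e, p
        new[i] = best
    return new
```

For example, on a synthetic edge map whose strength peaks along one image column (a stand-in for a road boundary), a contour initialized nearby drifts onto the ridge after a few iterations; the real system would run such updates per frame to track the boundary over an image sequence.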
Report (3 results)
Research Products (11 results)