
Visual navigation technology for intelligent vehicle - Road model for autonomous driving and alarming

Research Project

Project/Area Number 12650425
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Single-year Grants
Section General
Research Field Measurement engineering
Research Institution Osaka University

Principal Investigator

YAGI Yasushi  Osaka University, Graduate School of Engineering Science, Department of Systems and Human Science, Associate Professor (60231643)

Project Period (FY) 2000 – 2001
Project Status Completed (Fiscal Year 2001)
Budget Amount
¥3,800,000 (Direct Cost: ¥3,800,000)
Fiscal Year 2001: ¥1,900,000 (Direct Cost: ¥1,900,000)
Fiscal Year 2000: ¥1,900,000 (Direct Cost: ¥1,900,000)
Keywords Active contour model / Visual navigation / 3D reconstruction / Reactive navigation / Road tracking / Route guidance / 3D shape reconstruction / Monocular image sequence
Research Abstract

Several researchers have investigated visually guided navigation of autonomous road vehicles using various sources of visual information, including color, disparity, range, and optical flow. Such research has emphasized that determining the road ahead is a significant component of developing intelligent road vehicles. In this project, we propose three models for solving the general problems of road following and 3D road-shape reconstruction for a smart vehicle. The three methods are all based on active contour models (ACMs).
1) The first method assumes that road boundaries are parallel and that the width of the road is constant. We detect and track the road region in the image using coupled active contour models subject to a parallelism constraint. The system then generates a 3D road model from a single image. We evaluated the effectiveness of the method by applying it to real road scenes comprising more than 4000 images.
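As a rough illustration of the coupled-contour idea (not the project's actual implementation), the hypothetical `coupled_snakes_step` below performs one greedy update of two road-boundary contours: each control point is pulled toward strong image edges while a second energy term penalizes deviation of the left-right separation from a constant road width. All names, weights, and the one-pixel move set are assumptions for this sketch.

```python
import numpy as np

def coupled_snakes_step(left, right, edge_map, width, alpha=1.0, beta=0.5):
    """One greedy update of two coupled road-boundary contours.

    left, right : (N, 2) arrays of (row, col) control points.
    edge_map    : 2D array; higher values mean stronger image edges.
    width       : expected road width in pixels (parallelism constraint).
    alpha       : weight of the image (edge-attraction) term.
    beta        : weight of the parallelism (constant-width) term.

    Each point may shift one pixel column-wise; the shift minimizing the
    combined energy is kept. Both contours are updated from the same
    pre-step positions.
    """
    h, w = edge_map.shape
    new_left, new_right = left.copy(), right.copy()
    for contour, other, out in ((left, right, new_left), (right, left, new_right)):
        for i, (r, c) in enumerate(contour):
            best_c, best_e = c, np.inf
            for dc in (-1, 0, 1):
                cc = int(np.clip(c + dc, 0, w - 1))
                e_img = -alpha * edge_map[int(r), cc]                # attract to edges
                e_par = beta * (abs(cc - other[i, 1]) - width) ** 2  # keep constant width
                if e_img + e_par < best_e:
                    best_e, best_c = e_img + e_par, cc
            out[i, 1] = best_c
    return new_left, new_right
```

Iterating this step lets the two contours snap onto parallel road edges; in the project itself, the tracked contours additionally feed the 3D road-model generation from a single image.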
2) The second method performs reactive visual navigation based on omnidirectional sensing. The omnidirectional image sensor HyperOmni Vision projects the robot at the center of the input image, so the rough free space around the robot can be extracted with an active contour model. The method produces low-level commands that keep the robot in the middle of the free space and avoid collisions by balancing the shape of the extracted contour. The robot can avoid obstacles and move along a corridor by tracking the closed-loop curve with an active contour model. Furthermore, the method directly represents the spatial relations between the environment and the robot in image coordinates, so it can control the robot without geometric 3D reconstruction.
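The balancing idea can be sketched as follows, assuming the extracted free-space contour is given as one radius per bearing angle around the image center. The hypothetical `balance_steering` helper and its simple proportional rule are illustrative assumptions, not the project's control law.

```python
import numpy as np

def balance_steering(radii, angles):
    """Steering command from a free-space contour in an omnidirectional image.

    radii  : distances (in pixels) from the image center (= the robot) to
             the extracted free-space contour, one per bearing angle.
    angles : bearings in radians (0 = robot heading, positive = left side).

    Returns a signed steering value in [-1, 1]: positive turns left,
    i.e. toward the roomier side, keeping the robot mid-corridor.
    """
    left = radii[(angles > 0) & (angles < np.pi)]    # clearance on the left
    right = radii[(angles < 0) & (angles > -np.pi)]  # clearance on the right
    # Turn toward the side with more free space, proportional to the imbalance.
    return (left.mean() - right.mean()) / (left.mean() + right.mean())
```

Because both clearances are read directly off the image, no metric 3D reconstruction is needed, which is the point made in the abstract above.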
3) The third model navigates the robot along a route. The route is memorized as a series of consecutive omnidirectional images of the horizon while the robot moves to the goal position. While the robot is navigating to the goal point, the input is matched against the memorized spatio-temporal route-pattern images using dual active contour models, yielding an accurate estimate of the robot's position.
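The route-matching step can be sketched as below. The hypothetical `localize` function replaces the project's dual-active-contour matching with a plain sum-of-squared-differences search over the memorized horizon scan-lines, purely to illustrate how a spatio-temporal route pattern yields a position estimate.

```python
import numpy as np

def localize(route_scans, current_scan):
    """Estimate position along a memorized route.

    route_scans  : (T, W) array; row t is the 1-D horizon scan-line
                   recorded at step t of the teaching run (the
                   spatio-temporal route-pattern image).
    current_scan : (W,) horizon scan-line observed now.

    Returns the index t of the best-matching memorized scan, i.e. the
    estimated position along the route (SSD match here; the project
    itself matches with dual active contour models).
    """
    ssd = ((route_scans - current_scan) ** 2).sum(axis=1)  # distance to every stored scan
    return int(np.argmin(ssd))
```

A teaching run fills `route_scans` row by row; at navigation time each new scan-line is matched against the whole pattern to recover the robot's progress toward the goal.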

Report

(3 results)
  • 2001 Annual Research Report
  • Final Research Report Summary
  • 2000 Annual Research Report
  • Research Products (11 results)

All Publications (11 results)

  • [Publications] Yagi, Brady, Kawasaki, Yachida: "An active contour road model for road tracking and 3D road-shape reconstruction" Transactions of the IEICE. Vol.J84-D-II, No.8. 1597-1607 (2001)

    • Description
      From the Final Research Report Summary (Japanese)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] Yagi, Nagai, Yamazawa, Yachida: "Robot guidance using omnidirectional visual information - Path following and collision avoidance" Transactions of the ISCIE. Vol.14, No.4. 209-217 (2001)

    • Description
      From the Final Research Report Summary (Japanese)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] Y. Yagi, H. Nagai, K. Yamazawa, M. Yachida: "Reactive Visual Navigation based on Omnidirectional Sensing - Path Following and Collision Avoidance -" Journal of Intelligent and Robotic Systems. Vol.31, No.4. 379-395 (2001)

    • Description
      From the Final Research Report Summary (Japanese)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] Y. Yagi, M. Brady, Y. Kawasaki, M. Yachida: "Active contour road model for smart vehicle"Transactions of the IEICE DII. Vol.J84-D-II, No.8. 1597-1607 (2001)

    • Description
      From the Final Research Report Summary (English)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] Yasushi Yagi, Hiroyuki Nagai, Kazumasa Yamazawa, Masahiko Yachida: "Reactive Visual Navigation based on Omnidirectional Sensing - Path Following and Collision Avoidance -"Transactions of the ISCIE. Vol.14, No.4. 209-217 (2001)

    • Description
      From the Final Research Report Summary (English)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] Yasushi Yagi, Hiroyuki Nagai, Kazumasa Yamazawa, Masahiko Yachida: "Reactive Visual Navigation based on Omnidirectional Sensing - Path Following and Collision Avoidance -"Journal of Intelligent and Robotic Systems. Vol.31, No.4. 379-395 (2001)

    • Description
      From the Final Research Report Summary (English)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] Yagi, Brady, Kawasaki, Yachida: "An active contour road model for road tracking and 3D road-shape reconstruction" Transactions of the IEICE. Vol.J84-D-II, No.8. 1597-1607 (2001)

    • Related Report
      2001 Annual Research Report
  • [Publications] Yagi, Nagai, Yamazawa, Yachida: "Robot guidance using omnidirectional visual information - Path following and collision avoidance" Transactions of the ISCIE. Vol.14, No.4. 209-217 (2001)

    • Related Report
      2001 Annual Research Report
  • [Publications] Y. Yagi, H. Nagai, K. Yamazawa, M. Yachida: "Reactive Visual Navigation based on Omnidirectional Sensing - Path Following and Collision Avoidance -" Journal of Intelligent and Robotic Systems. Vol.31, No.4. 379-395 (2001)

    • Related Report
      2001 Annual Research Report
  • [Publications] Y. Yagi, M. Brady, Y. Kawasaki, M. Yachida: "Active contour road model for smart vehicle" Proc. Int. Conf. Pattern Recognition. Vol.3. 819-822 (2000)

    • Related Report
      2000 Annual Research Report
  • [Publications] Yagi, Brady, Kawasaki, Yachida: "A road representation model for intelligent vehicles" Proc. Meeting on Image Recognition and Understanding (MIRU). Vol.2. 313-318 (2000)

    • Related Report
      2000 Annual Research Report

Published: 2000-04-01   Modified: 2016-04-21  
