
Recognition of Behaving Area and Objects by Omni-directional Camera

Research Project

Project/Area Number 15500120
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type: Single-year Grants
Section: General
Research Field: Perception information processing / Intelligent robotics
Research Institution: Tamagawa University

Principal Investigator

YAMADA Hiromitsu, Tamagawa University, Faculty of Engineering, Professor (10328023)

Co-Investigator (Kenkyū-buntansha): MORI Terunori, Tamagawa University, Faculty of Engineering, Professor (60245975)
Project Period (FY) 2003 – 2004
Project Status Completed (Fiscal Year 2004)
Budget Amount
¥3,200,000 (Direct Cost: ¥3,200,000)
Fiscal Year 2004: ¥1,600,000 (Direct Cost: ¥1,600,000)
Fiscal Year 2003: ¥1,600,000 (Direct Cost: ¥1,600,000)
Keywords: object recognition / extraction of action environments / extraction of disparity / motion parallax extraction / model-based object recognition / global solution and local solution / elastic edge sequence matching method / dynamic programming
Research Abstract

Objects are recognized and the behaving area is acquired through the cooperation of bottom-up extraction of 3D space and top-down, model-based object extraction, applied to dynamic images taken by a camera mounted on a moving body. The peripheral part of the omni-directional camera image is used to extract optical flow by bottom-up processing, and the central part of the image is used to extract objects by top-down processing.
It is difficult to coordinate the estimation of the camera system's ego-motion, the acquisition of 3D information about static objects in the scene, previously acquired knowledge about objects, and the extraction of objects that move by themselves, because these subproblems are mutually linked: to solve any one of them, all of them must be solved.
In this report, we attack the problem by combining ego-motion cues obtained from motion parallax with model-based object recognition.
In chapter 1, range information (the distance from the camera to an object) is extracted by a motion-parallax extraction method in which the camera is moved vertically and the time at which an edge crosses between neighboring pixels is measured.
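The geometry behind this ranging step can be sketched as follows. This is only a minimal pinhole-model illustration under assumed units; the function name and parameters are hypothetical and not taken from the report:

```python
def depth_from_edge_crossing(focal_mm, pixel_pitch_mm, camera_speed_mm_s, crossing_time_s):
    """Estimate range Z from the time an edge takes to cross one pixel.

    A camera translating at speed v makes a static point at depth Z move
    across the image at velocity u = f * v / Z (pinhole model).  Measuring
    the time dt for an edge to cross between neighboring pixels gives the
    image velocity u = pixel_pitch / dt, hence Z = f * v * dt / pixel_pitch.
    """
    image_velocity = pixel_pitch_mm / crossing_time_s  # mm/s on the sensor
    return focal_mm * camera_speed_mm_s / image_velocity
```

For example, with an 8 mm lens, 0.01 mm pixel pitch, a 100 mm/s vertical camera motion, and a 5 ms crossing time, the estimated range is 400 mm.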
In chapter 2, ego-motion information is extracted from the optical flow in the peripheral part of a wide-angle camera image while the camera moves along its viewing direction. The parallax is extracted by the motion-parallax extraction method, the image velocity is then computed, and from it the moving direction of the moving body is calculated.
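Under pure camera translation, the moving direction corresponds to the focus of expansion (FOE) of the flow field. The report does not specify its estimator, so the following is only a generic least-squares sketch of that step, with hypothetical names:

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus of expansion (FOE) from optical-flow vectors.

    Under pure camera translation, every flow vector (u, v) observed at
    image point (x, y) lies on a ray through the FOE (x0, y0):
        v * (x - x0) - u * (y - y0) = 0
    Stacking one such equation per flow vector yields the linear system
        [v, -u] @ [x0, y0]^T = v*x - u*y,
    solved here in the least-squares sense.
    """
    points = np.asarray(points, dtype=float)
    flows = np.asarray(flows, dtype=float)
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe
```

With flow vectors that radiate exactly from image point (2, 3), the estimator recovers (2, 3); with noisy real flow it returns the best-fitting expansion point.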
In chapter 3, a conventional matching method for a static frame using a 2D model is applied first. A scene in which a walking person is viewed from the side is used, and the positions of the body components (left and right arms and legs) are located. In addition, by introducing a 3D model, the self-occlusion phenomenon is taken into account in the analysis.
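The keyword list mentions an "elastic edge sequence matching method" based on dynamic programming. The report gives no algorithmic details, so the following is only a generic DP sequence-alignment sketch of that idea, with hypothetical names and a simple absolute-difference local cost:

```python
def elastic_match(seq_a, seq_b):
    """DP ("elastic") alignment cost between two edge-feature sequences.

    Classic dynamic-programming sequence matching: cell (i, j) holds the
    minimum cumulative cost of aligning seq_a[:i+1] with seq_b[:j+1],
    allowing one-to-many ("elastic") correspondences via three moves:
    match, stretch seq_a, stretch seq_b.
    """
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = abs(seq_a[i] - seq_b[j])  # local edge dissimilarity
            if i == 0 and j == 0:
                cost[i][j] = d
            else:
                best = min(
                    cost[i - 1][j - 1] if i and j else INF,  # match
                    cost[i - 1][j] if i else INF,            # stretch seq_b
                    cost[i][j - 1] if j else INF,            # stretch seq_a
                )
                cost[i][j] = d + best
    return cost[n - 1][m - 1]
```

For example, `elastic_match([1, 2, 3], [1, 2, 2, 3])` is 0: the repeated element is absorbed by a stretch move, which is what makes the matching elastic with respect to local deformation of the edge sequence.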

Report

(3 results)
  • 2004 Annual Research Report
  • 2004 Final Research Report Summary
  • 2003 Annual Research Report
  • Research Products

    (3 results)


Journal Article (2 results), Publications (1 result)

  • [Journal Article] Analysis of Walking by Image Recognition (2004)

    • Author(s)
Hiromitsu YAMADA
    • Journal Title

Proceedings of the 9th Student Conference of the Tokyo Chapter of IEICE

      Pages: 98-98

    • Description
From the "Final Research Report Summary" (Japanese)
    • Related Report
2004 Annual Research Report / 2004 Final Research Report Summary
  • [Journal Article] Analysis of Walking by Image Recognition (2004)

    • Author(s)
      Yutaro TANAKA, Hiromitsu YAMADA
    • Journal Title

Proceedings of the 9th Student Conference of the Tokyo Chapter of IEICE, Japan

      Pages: 98-98

    • Description
From the "Final Research Report Summary" (English)
    • Related Report
      2004 Final Research Report Summary
  • [Publications] Yutaro TANAKA, Hiromitsu YAMADA: "Analysis of Walking by Image Recognition", Proceedings of the 9th Student Conference of the Tokyo Chapter of IEICE. 98 (2004)

    • Related Report
      2003 Annual Research Report


Published: 2003-04-01   Modified: 2016-04-21  


Powered by NII kakenhi