Recognition of Behaving Area and Objects by Omni-directional Camera
Project/Area Number | 15500120
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Single-year Grants
Section | General
Research Field | Perception information processing / Intelligent robotics
Research Institution | Tamagawa University
Principal Investigator | YAMADA Hiromitsu, Tamagawa University, Faculty of Engineering, Professor (10328023)
Co-Investigator (Kenkyū-buntansha) | MORI Terunori, Tamagawa University, Faculty of Engineering, Professor (60245975)
Project Period (FY) | 2003 – 2004
Project Status | Completed (Fiscal Year 2004)
Budget Amount | ¥3,200,000 (Direct Cost: ¥3,200,000)
Fiscal Year 2004: ¥1,600,000 (Direct Cost: ¥1,600,000)
Fiscal Year 2003: ¥1,600,000 (Direct Cost: ¥1,600,000)
Keywords | object recognition / extraction of action environments / extraction of disparity / motion parallax extraction / model-based object recognition / global solution and local solution / elastic edge sequence matching method / dynamic programming
Research Abstract |
Recognition of objects and acquisition of the behaving area are performed through the cooperation of 3D-space extraction by bottom-up processing and object extraction by top-down, model-based processing, applied to dynamic images taken by a camera mounted on a moving object. The peripheral part of an image from an omni-directional camera is used to extract optical flow by bottom-up processing, while the central part of the image is used to extract objects by top-down processing. It is difficult to combine estimation of the ego-motion of the camera system, acquisition of 3D information about static objects in the scene, previously acquired knowledge about objects, and extraction of objects that move by themselves, because these subproblems are linked to each other: to solve any one of them, all of them must be solved. In this report, we attempt to solve the problem by using cues from ego-motion obtained through motion parallax, together with model-based object recognition.
In chapter 1, range information (the distance from the camera to an object) is extracted by a motion-parallax extraction method in which the camera is moved vertically and the timing at which an edge crosses between neighboring pixels is measured. In chapter 2, ego-motion information is extracted from the optical flow in the peripheral part of a wide-angle camera image while the camera moves in its viewing direction; the parallax is extracted by the motion-parallax extraction method, the image velocity is then computed, and the moving direction of the moving object is calculated. In chapter 3, a conventional matching method that fits a 2D model to a static frame is applied first. Here, a scene in which a walking man is viewed from the side is used, and the position of each body component (left/right arm and leg) is located. In addition, by introducing a 3D model, the self-occlusion phenomenon is taken into account in the analysis.
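The report gives no implementation details beyond the abstract, but the chapter-1 idea — recovering range from the time an edge takes to cross between neighboring pixels while the camera translates at a known speed — can be sketched as below. The function name, the pinhole small-angle model, and all numeric parameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

def range_from_motion_parallax(camera_speed, pixel_pitch, focal_length, crossing_time):
    """Estimate object distance from motion parallax (illustrative sketch).

    Assumes a pinhole camera translating perpendicular to its optical
    axis (e.g. vertically) at camera_speed [m/s].  A static edge then
    sweeps across the image; crossing_time [s] is the measured time the
    edge takes to move from one pixel to its neighbor.

    Under the small-angle model, the image-plane velocity of a point at
    depth Z is  v_img = focal_length * camera_speed / Z,  and the edge
    crosses one pixel in  crossing_time = pixel_pitch / v_img,  so

        Z = focal_length * camera_speed * crossing_time / pixel_pitch.
    """
    return focal_length * camera_speed * crossing_time / pixel_pitch

# Example: camera moved vertically at 0.1 m/s, 5 um pixel pitch, 4 mm
# focal length, edge takes 2 ms to cross one pixel -> Z = 0.16 m.
print(range_from_motion_parallax(0.1, 5e-6, 4e-3, 2e-3))
```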
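Chapter 2 derives the camera's moving direction from the optical flow in the image periphery. One standard way to do this for a purely translating camera — offered only as a plausible reading, since the report does not specify the estimator — is to locate the focus of expansion (FOE): every flow vector of a static point lies on a ray through the FOE, so the FOE can be recovered by linear least squares.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus-of-expansion estimate (illustrative sketch).

    For a purely translating camera, the flow f at image point p of a
    static scene point satisfies (p - e) x f = 0, where e is the FOE.
    Written out:  f_y * e_x - f_x * e_y = f_y * p_x - f_x * p_y,
    which is linear in e = (e_x, e_y).
    """
    points = np.asarray(points, float)   # (N, 2) image points
    flows = np.asarray(flows, float)     # (N, 2) flow vectors
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    e, *_ = np.linalg.lstsq(A, b, rcond=None)
    return e  # image location of the heading direction

# Synthetic check: a flow field radiating from a true FOE at (10, -5).
rng = np.random.default_rng(0)
pts = rng.uniform(-100, 100, (50, 2))
flo = (pts - np.array([10.0, -5.0])) * 0.02   # pure expansion field
print(estimate_foe(pts, flo))                  # ~ [10, -5]
```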
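The keywords name an "elastic edge sequence matching method" solved by dynamic programming; the report does not give the recurrence, but a minimal DTW-style sketch of elastic matching between a model edge sequence and an observed edge sequence conveys the general idea. The cost function and step pattern here are assumptions, not the authors' exact formulation.

```python
import numpy as np

def elastic_edge_match(model, observed):
    """Elastic matching of two edge-point sequences by dynamic
    programming (a DTW-style sketch, not the authors' exact recurrence).

    model, observed : (M, 2) and (N, 2) arrays of edge-point coordinates.
    Returns the minimal accumulated matching cost, letting each model
    point stretch over several observed points and vice versa.
    """
    model = np.asarray(model, float)
    observed = np.asarray(observed, float)
    M, N = len(model), len(observed)
    # Pairwise local costs: Euclidean distance between edge points.
    d = np.linalg.norm(model[:, None, :] - observed[None, :, :], axis=2)
    D = np.full((M + 1, N + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            # Diagonal step = match; vertical/horizontal = elastic stretch.
            D[i, j] = d[i - 1, j - 1] + min(D[i - 1, j - 1],
                                            D[i - 1, j],
                                            D[i, j - 1])
    return D[M, N]

# Example: an observed edge that is a stretched, slightly noisy copy of
# the model edge still matches with low cost despite the length change.
t = np.linspace(0, 1, 20)
model = np.column_stack([t, np.sin(2 * np.pi * t)])
s = np.linspace(0, 1, 35)
observed = np.column_stack([s, np.sin(2 * np.pi * s) + 0.01])
print(elastic_edge_match(model, observed))
```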