Project/Area Number | 15360213 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Measurement engineering |
Research Institution | Tohoku University |
Principal Investigator | DEGUCHI Koichiro, Tohoku University, Graduate School of Information Science, Professor (30107544) |
Co-Investigator (Kenkyū-buntansha) |
OKATANI Takayuki, Tohoku University, Graduate School of Information Science, Associate Professor (00312637)
USHIDA Shun, Tohoku University, Graduate School of Information Science, Research Assistant (30343114)
NAKAJIMA Taira, Tohoku University, Graduate School of Engineering, Lecturer (30312614)
|
Project Period (FY) | 2003 – 2005 |
Project Status | Completed (Fiscal Year 2005) |
Budget Amount |
¥14,500,000 (Direct Cost: ¥14,500,000)
Fiscal Year 2005: ¥4,200,000 (Direct Cost: ¥4,200,000)
Fiscal Year 2004: ¥4,400,000 (Direct Cost: ¥4,400,000)
Fiscal Year 2003: ¥5,900,000 (Direct Cost: ¥5,900,000)
|
Keywords | Computer Vision / Active Vision / 3-Dimensional Vision / Space Perception / Robot Vision / Motion Image Processing / Camera Calibration / Cooperative Vision |
Research Abstract |
This research project aimed to develop an active vision strategy that lets a robot obtain 3D scene understanding from its stereo eyes. That is, the robot constructs a 3D map of an unknown environment by itself, using its own eyes (cameras) and its own intentional actions. The task of getting to know the external world through vision is extremely difficult for artificial systems such as robots, whereas humans achieve it easily, without a teacher or prior knowledge. In this research, we based our answer on the key concept of invariance under one's own actions, and reported a realization of direct perception of three-dimensional space in a robot. We proposed a method for calibrating the stereo eyes of a robot so that it can build a 3D map in its brain. The method needs no external reference such as calibration objects, only a combination of active motion and visual perception. The idea is to some extent similar to so-called self-calibration; in self-calibration, however, the calibration and the subsequent construction of the 3D map are still treated separately. In our method, we combine action and vision to carry out the calibration and the construction of the 3D map together efficiently, relying only on the consistency of the calibration results. We introduced the constraint that a stationary object, both in the real environment and on the robot's 3D map, never moves even when the robot moves around. This means that we accept a calibration error and the resulting geometric distortion of the constructed 3D map: what matters most for the 3D map, in our view, is the consistency between the intended motion of the robot and the images the robot perceives. We demonstrated the feasibility of this idea through extensive simulation experiments and implementations on actual robots.
|
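
The consistency constraint above lends itself to a simple formulation: reconstruct a stationary landmark from the stereo pair at several robot poses, transport each reconstruction back through the known ego-motion, and search for the calibration under which the landmark stays put on the map. The following is a minimal sketch of that idea, assuming an idealized rectified stereo pair with only the focal length and baseline unknown and perfectly known ego-motion; all function names and numerical values are illustrative, not taken from the project.

import numpy as np
from scipy.optimize import minimize

def triangulate(xl, xr, f, b):
    # Rectified stereo: both cameras share one orientation, the right
    # camera is offset by baseline b along +X. xl, xr are (x, y) pixel
    # coordinates relative to the principal point; f is the focal
    # length in pixels.
    disparity = xl[0] - xr[0]
    Z = f * b / disparity
    return np.array([xl[0] * Z / f, xl[1] * Z / f, Z])

def consistency_error(params, observations, poses):
    # "A stationary object on the map never moves when the robot moves":
    # triangulate the same landmark at every pose, transport each result
    # back to the initial frame through the known ego-motion (R, t), and
    # penalize any residual motion of the landmark on the map.
    f, b = params
    p0 = triangulate(*observations[0], f, b)   # map point at the first pose
    err = 0.0
    for (xl, xr), (R, t) in zip(observations[1:], poses[1:]):
        p_cam = triangulate(xl, xr, f, b)      # landmark in the pose-k camera frame
        p_map = R @ p_cam + t                  # transported back to the initial frame
        err += np.sum((p_map - p0) ** 2)       # zero iff the map point stays put
    return err

# --- Synthetic demonstration (hypothetical numbers) ----------------------
f_true, b_true = 520.0, 0.12                   # ground-truth focal length, baseline
P = np.array([0.4, -0.2, 2.5])                 # one stationary landmark

def project(P_cam, f, b):
    # Ideal pinhole projection of a camera-frame point into both images.
    xl = np.array([f * P_cam[0] / P_cam[2], f * P_cam[1] / P_cam[2]])
    xr = np.array([f * (P_cam[0] - b) / P_cam[2], xl[1]])
    return xl, xr

# Known intentional motions: stay put, step sideways, step forward.
poses = [(np.eye(3), np.zeros(3)),
         (np.eye(3), np.array([0.3, 0.0, 0.0])),
         (np.eye(3), np.array([0.0, 0.0, 0.5]))]
observations = [project(R.T @ (P - t), f_true, b_true) for R, t in poses]

res = minimize(consistency_error, x0=[400.0, 0.2],
               args=(observations, poses), method="Nelder-Mead")
print("recovered (f, baseline):", res.x)       # should approach (520.0, 0.12)

As in the project's formulation, nothing here requires an external calibration object: a miscalibrated pair still yields a map, but one on which a known motion makes supposedly stationary points drift, and minimizing that drift is what calibrates the cameras.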