Project/Area Number | 08650511 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Measurement and Control Engineering |
Research Institution | CHUO UNIVERSITY |
Principal Investigator | SAKANE Shigeyuki, Chuo University, Faculty of Science & Engineering, Department of Industrial and Systems Engineering, Professor (10276694) |
Project Period (FY) | 1996 – 1998 |
Project Status | Completed (Fiscal Year 1998) |
Budget Amount | ¥2,300,000 (Direct Cost: ¥2,300,000) |
Fiscal Year 1998: ¥400,000 (Direct Cost: ¥400,000)
Fiscal Year 1997: ¥900,000 (Direct Cost: ¥900,000)
Fiscal Year 1996: ¥1,000,000 (Direct Cost: ¥1,000,000)
Keywords | sensor planning / active vision / センサクロクニング |
Research Abstract |
We have developed several systems aimed at establishing active vision sensor planning technology that gives robot systems flexible and adaptive capabilities. The research is summarized as follows:
(1) We developed a sensor planning system embedded in a STRIPS-like task planning system. For the vision sensor planning, automatic determination of camera position and zoom control was developed (a pinhole-model sketch of the zoom computation is given below). Simulation experiments demonstrate the effectiveness of the system.
(2) The human visual system has evolved to have varying resolution across the retina: foveal vision provides very high resolution in a small region, whereas peripheral vision provides low spatial resolution over a wide field. Taking these functions into account, we developed an active vision system that permits tight cooperation between foveal and peripheral vision. We employed a pan-tilt camera for the foveal vision subsystem and stereo cameras with wide-angle lenses for the peripheral vision subsystem. Since input images from the peripheral vision are geometrically deformed, we use a system that performs real-time correction of the deformation (see the undistortion sketch below). Experiments on a tracking task demonstrate the fundamental functions and advantages of the proposed active vision system.
(3) Recently, much attention has been paid to Augmented Reality (AR), which supports people's daily lives by blending multimodal information into the real world, typically onto the objects under consideration. We attempted to build a flexible human-robot interface using an extended digital desk for guiding and teaching a robot system. A prototype system consists of a projector subsystem for information display and a real-time tracking vision subsystem for recognizing human actions. We implemented two levels of interaction using a virtual teaching panel and a projected real image of the task environment on a desk for intuitive teaching (see the camera-to-projector mapping sketch below). Experiments on teaching robot tasks demonstrate the usefulness of the proposed system.
|
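Item (1) determines camera position and zoom automatically. As a rough illustration, the following pinhole-model sketch picks the focal length (zoom) so that an object of known physical size spans a desired fraction of the image at a given viewing distance; the function name and parameters are hypothetical, not taken from the report.

    def focal_length_for_coverage(object_size_m: float,
                                  distance_m: float,
                                  image_width_px: int,
                                  target_fraction: float) -> float:
        """Return the focal length (in pixels) that makes an object of the given
        physical size span `target_fraction` of the image width at `distance_m`."""
        if not (0.0 < target_fraction <= 1.0):
            raise ValueError("target_fraction must be in (0, 1]")
        # Pinhole projection: span_in_pixels = f_px * object_size / distance
        desired_span_px = target_fraction * image_width_px
        return desired_span_px * distance_m / object_size_m

    # Example: a 0.1 m part viewed from 0.8 m should cover half of a 640-pixel-wide image.
    print(focal_length_for_coverage(0.1, 0.8, 640, 0.5))  # -> 2560.0 px

The real-time deformation correction in item (2) corresponds to standard wide-angle lens undistortion. Below is a minimal sketch using OpenCV; the calibration values are placeholders, and the report's own implementation may differ. Precomputing the undistortion maps once keeps the per-frame cost to a single remap, which is what makes real-time correction practical.

    import numpy as np
    import cv2

    # Placeholder intrinsics and distortion coefficients for a wide-angle camera.
    camera_matrix = np.array([[300.0, 0.0, 320.0],
                              [0.0, 300.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])  # strong barrel distortion
    image_size = (640, 480)

    # Compute the undistortion maps once, outside the per-frame loop.
    map1, map2 = cv2.initUndistortRectifyMap(
        camera_matrix, dist_coeffs, None, camera_matrix, image_size, cv2.CV_16SC2)

    def correct_frame(frame):
        """Undistort one peripheral-camera frame using the precomputed maps."""
        return cv2.remap(frame, map1, map2, cv2.INTER_LINEAR)

The digital-desk interface in item (3) needs a mapping between points seen by the tracking camera and points drawn by the projector. For a planar desk surface this is commonly obtained from a homography estimated on corresponding points; the sketch below assumes such a calibration, with placeholder correspondences.

    import numpy as np
    import cv2

    # Placeholder correspondences: desk corners in camera pixels and projector pixels.
    cam_pts = np.array([[50, 40], [600, 45], [610, 430], [60, 440]], dtype=np.float32)
    proj_pts = np.array([[0, 0], [1023, 0], [1023, 767], [0, 767]], dtype=np.float32)
    H, _ = cv2.findHomography(cam_pts, proj_pts)

    def camera_to_projector(x: float, y: float):
        """Map a point (e.g. a detected fingertip) from camera to projector coordinates."""
        p = cv2.perspectiveTransform(np.array([[[x, y]]], dtype=np.float32), H)
        return float(p[0, 0, 0]), float(p[0, 0, 1])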