Applying Panoramic Sensing to Multi-Agent Robotic Systems
Grant-in-Aid for International Scientific Research
|Allocation Type||Single-year Grants|
|Research Institution||KYOTO UNIVERSITY|
ISHIGURO Hiroshi Department of Information Science, Graduate School of Engineering, Kyoto University, Associate Professor (10232282)
NISHIMURA Toshikazu Department of Information Science, Graduate School of Engineering, Kyoto University, Research Associate (00273483)
ISHIDA Toru Department of Information Science, Graduate School of Engineering, Kyoto University, Professor (20252489)
BARTH Matthew College of Engineering, University of California, Riverside, Associate Professor
|Project Period (FY)||1995 – 1996|
|Status||Completed (Fiscal Year 1996)|
|Budget Amount||¥3,200,000 (Direct Cost : ¥3,200,000)|
Fiscal Year 1996 : ¥1,600,000 (Direct Cost : ¥1,600,000)
Fiscal Year 1995 : ¥1,600,000 (Direct Cost : ¥1,600,000)
|Keywords||Omnidirectional Sensor / Omnidirectional Image / Multiple Robot / Relative Localization / Identification / Robot Navigation / Panoramic Vision / Visually-guided Mobile Robot / Omnidirectional Vision / Distributed Cooperative System / Omnidirectional Stereo / Behavior|
This research focuses on the effectiveness of omnidirectional vision for mobile robots that constitute a multiple-robot system. With omnidirectional vision, the robots can identify other robots and estimate their relative positions. We have dealt with the following five issues concerning multiple robotic systems using omnidirectional vision (panoramic vision consists of both omnidirectional visual information obtained by rotating the camera and visual information obtained through linear motions of the camera).
(1) Algorithms to identify robots and estimate their relative positions.
(2) Planning methods for executing global tasks with multiple robots.
(3) Development of low-cost and compact omnidirectional vision sensors.
(4) Applying the algorithms and the omnidirectional vision sensor to real systems.
(5) Consideration of the merits of omnidirectional visual information.
Before executing given tasks, the robots need to identify the corresponding robots in the omnidirectional images and estimate their relative positions. The algorithm we developed solves this problem using a simple constraint: the sum of the angular intervals between the projections of the two other robots, measured in the images taken by three robots, is 180 degrees when the three robots observe each other. The algorithm is robust and can be used in complex environments containing both moving and static obstacles.
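The 180-degree constraint follows from the fact that the three mutually observing robots form a triangle, and the angular interval each robot measures is that triangle's interior angle at its own position. A minimal sketch of the check follows; the positions, function names, and tolerance are hypothetical, not taken from the original system:

```python
import math

def angular_interval(observer, a, b):
    """Angle at `observer` between the bearings to robots a and b,
    as they would appear in its omnidirectional image (radians)."""
    th_a = math.atan2(a[1] - observer[1], a[0] - observer[0])
    th_b = math.atan2(b[1] - observer[1], b[0] - observer[0])
    d = abs(th_a - th_b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)   # take the smaller arc

# Hypothetical planar positions of three robots.
p1, p2, p3 = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

# Sum of the three measured angular intervals.
total = (angular_interval(p1, p2, p3)
         + angular_interval(p2, p1, p3)
         + angular_interval(p3, p1, p2))
print(round(math.degrees(total), 6))  # → 180.0 (triangle angle sum)
```

In the real system the bearings come directly from the omnidirectional images, so the constraint can be tested without knowing any positions: a triple of projections whose intervals sum to 180 degrees is a candidate set of mutually observing robots.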
Then, the robots execute the given tasks. We developed a behavior-based planning method for a map-building task. Each robot has simple behaviors to explore unknown areas, reconstruct the local environment structure, and communicate with other robots. Since the system requires no centralized control, it can execute the task robustly.
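A behavior-based controller of this kind can be sketched as a fixed-priority selection over simple behaviors. The behavior names, percept flags, and priority order below are illustrative assumptions, not the original implementation:

```python
from dataclasses import dataclass

@dataclass
class Robot:
    # Hypothetical percepts; a real robot would derive these from
    # omnidirectional images and messages from other robots.
    obstacle_ahead: bool = False
    frontier_visible: bool = False   # unexplored area in view

def select_behavior(robot: Robot) -> str:
    """Fire the first applicable behavior, highest priority first."""
    if robot.obstacle_ahead:
        return "avoid_obstacle"
    if robot.frontier_visible:
        return "explore_unknown_area"
    return "share_local_map"   # communicate with other robots

print(select_behavior(Robot(frontier_visible=True)))  # explore_unknown_area
```

Because each robot runs this loop independently, there is no central planner to fail: the global map emerges from the local exploration and map-sharing behaviors.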
Finally, we developed low-cost, compact omnidirectional vision sensors, implemented the above algorithms on a real robotic system, and confirmed their reliability in real settings. We consider that the omnidirectional vision sensor and the developed algorithms will be key technologies for multiple robotic systems.
Research Output (16 results)