A study on mapping of multi-modal environment for mobile robot in semi-dynamic environment
Project/Area Number | 20500184 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Perception information processing / Intelligent robotics |
Research Institution | Chuo University |
Principal Investigator | SAKANE Shigeyuki, Chuo University, Faculty of Science and Engineering, Professor (10276694) |
Research Collaborator | HONGJUN Zhou, Tongji University (China), Associate Professor |
Project Period (FY) | 2008 – 2010 |
Project Status | Completed (Fiscal Year 2010) |
Budget Amount | ¥4,550,000 (Direct Cost: ¥3,500,000, Indirect Cost: ¥1,050,000) |
Fiscal Year 2010: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2009: ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000)
Fiscal Year 2008: ¥2,080,000 (Direct Cost: ¥1,600,000, Indirect Cost: ¥480,000)
Keywords | Mobile robot / Map building / Multimodal / Semi-dynamic environment / Wireless IC tag |
Research Abstract | In order to explore multimodal mapping techniques for a mobile robot working in a semi-dynamic environment, we developed three subsystems: (1) a subsystem that improves the accuracy of object pose estimation by using color image data in addition to range data, (2) a subsystem that estimates the direction of a tutor's finger-pointing gesture when teaching objects in the environment, and (3) a subsystem that teaches semantic information about the objects, such as their names and owners, using voice interaction between the tutor and the robot. We validated these subsystems. |
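The record does not include implementation details for subsystem (1). As a rough, hypothetical illustration only (not the project's actual code), the sketch below shows one common way to combine color and range data for object pose estimation: a color mask selects the object's pixels, the corresponding depth pixels are back-projected to 3D, and the centroid plus principal axes give a pose estimate. All function names, thresholds, and camera parameters here are assumptions.

```python
import numpy as np

def estimate_object_pose(depth, rgb, color_lo, color_hi, fx, fy, cx, cy):
    """Hypothetical sketch: fuse a color mask with range data to estimate
    an object's pose (centroid position + principal axes as orientation).

    depth : (H, W) array of depths in meters (0 where invalid)
    rgb   : (H, W, 3) uint8 color image registered to the depth image
    color_lo, color_hi : per-channel RGB bounds that roughly segment the object
    fx, fy, cx, cy     : pinhole intrinsics of the range sensor
    """
    # 1. Color-based segmentation: keep pixels whose RGB lies in the target range
    #    and that have a valid depth measurement.
    mask = np.all((rgb >= color_lo) & (rgb <= color_hi), axis=-1) & (depth > 0)
    v, u = np.nonzero(mask)
    if len(u) < 50:                      # too few pixels for a reliable estimate
        return None

    # 2. Back-project the masked depth pixels to 3D camera coordinates.
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)    # (N, 3) object point cloud

    # 3. Pose estimate: centroid as position, PCA axes as orientation.
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    _, eigvecs = np.linalg.eigh(cov)     # columns are principal axes (ascending variance)
    R = eigvecs[:, ::-1]                 # largest-variance axis first
    if np.linalg.det(R) < 0:             # enforce a right-handed rotation matrix
        R[:, -1] *= -1
    return centroid, R
```

In this kind of fusion, the color image compensates for the range sensor's segmentation ambiguity, while the range data supplies the metric 3D geometry; the abstract's subsystem (1) follows the same general idea of using both modalities together.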