Project/Area Number | 26630099 |
Research Category | Grant-in-Aid for Challenging Exploratory Research |
Allocation Type | Multi-year Fund |
Research Field | Intelligent mechanics/Mechanical systems |
Research Institution | Kyushu University |
Principal Investigator | Kurazume Ryo, Kyushu University, Faculty of Information Science and Electrical Engineering, Professor (70272672) |
Co-Investigator (Kenkyū-buntansha) | Yumi Iwashita, Kyushu University, Faculty of Information Science and Electrical Engineering, Associate Professor (70467877) |
Project Period (FY) | 2014-04-01 – 2016-03-31 |
Project Status | Completed (Fiscal Year 2015) |
Budget Amount | ¥3,900,000 (Direct Cost: ¥3,000,000, Indirect Cost: ¥900,000) |
Fiscal Year 2015: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2014: ¥2,470,000 (Direct Cost: ¥1,900,000, Indirect Cost: ¥570,000)
Keywords | Intelligent robotics / Informationally structured environment / Wearable sensor / Service robot / First-person image |
Outline of Final Research Achievements | In this research, we proposed a new concept of "fourth-person sensing" for service robots, which combines wearable cameras (the first-person viewpoint), sensors mounted on robots (the second-person viewpoint), and sensors embedded in the informationally structured environment (the third-person viewpoint) to recognize the surroundings of a service robot correctly and efficiently. Each type of sensor has its own advantages and disadvantages; the proposed concept compensates for the disadvantages of each by combining the advantages of all of them. This technique is quite useful for accurately understanding a user's intention and the context of a scene. As one application of the proposed concept, we developed an HCI system that combines first-person and third-person sensing, and demonstrated the effectiveness of the proposed concept through experiments. |
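As an illustration only, the sketch below shows one way the fourth-person sensing idea could be expressed in code: observations of a user's position from the first-, second-, and third-person viewpoints are fused with confidence weights so that the strengths of one viewpoint offset the weaknesses (occlusion, limited field of view) of another. All names, the position/confidence representation, and the weighted-average fusion rule are assumptions for this sketch, not the authors' implementation.

```python
# Minimal sketch (hypothetical, not the project's actual system) of
# "fourth-person sensing": fuse observations from a wearable camera
# (first person), robot-mounted sensors (second person), and sensors
# embedded in the informationally structured environment (third person)
# into a single confidence-weighted estimate of the user's position.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Observation:
    viewpoint: str                 # "first", "second", or "third" person
    position: Tuple[float, float]  # estimated user position (x, y) in metres
    confidence: float              # 0.0-1.0, e.g. low when the view is occluded


def fuse(observations: List[Observation]) -> Tuple[float, float]:
    """Confidence-weighted average: each viewpoint compensates for the
    weaknesses of the others, so the fused estimate stays usable even
    when one sensor fails or is occluded."""
    total = sum(o.confidence for o in observations)
    if total == 0.0:
        raise ValueError("no usable observation from any viewpoint")
    x = sum(o.position[0] * o.confidence for o in observations) / total
    y = sum(o.position[1] * o.confidence for o in observations) / total
    return (x, y)


if __name__ == "__main__":
    obs = [
        Observation("first", (2.1, 0.9), 0.8),   # wearable camera
        Observation("second", (2.3, 1.1), 0.3),  # robot sensor, partly occluded
        Observation("third", (2.0, 1.0), 0.9),   # environment-embedded sensor
    ]
    print("fused user position:", fuse(obs))
```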