2017 Fiscal Year Final Research Report
Platform for concept formation based on multimodal information using a cloud VR system
Project/Area Number | 16K16133
Research Category | Grant-in-Aid for Young Scientists (B)
Allocation Type | Multi-year Fund
Research Field | Intelligent robotics
Research Institution | Ritsumeikan University
Principal Investigator |
Research Collaborator | TANIGUCHI Tadahiro (Ritsumeikan University, College of Information Science and Engineering, Professor)
Project Period (FY) | 2016-04-01 – 2018-03-31
Keywords | Multimodal / Concept / Cyber-physical / Bayesian model / Human-robot interaction
Outline of Final Research Achievements | In this research, we constructed SIGVerse+L, a cloud virtual environment that enables the learning of object and location concepts from multimodal information through large-scale interaction between people and robots. First, we implemented functions that reflect a subject's voice, gaze, and body motion on an avatar in the virtual environment. Next, we implemented, on ROS, functions for a robot in the virtual environment to collect multimodal information such as the subject's speech, images of the environment, and positions on the map. Finally, we achieved location concept learning based on human-robot interaction using SIGVerse+L, with a Bayesian model built on multimodal information.
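The location concept learning described above ties the robot's position on the map to the words a subject utters. As a minimal illustrative sketch only (not the project's actual model: the toy data, the fixed number of concepts, and the hard-assignment EM procedure are all assumptions for demonstration), each concept can be represented by a Gaussian over position plus a word distribution, learned jointly:

```python
import math
import random

random.seed(0)

# Toy multimodal observations: a 2-D robot position on the map paired with
# a word uttered by the subject. The place words and coordinates are
# invented for illustration, not taken from the SIGVerse+L experiments.
data = [((random.gauss(0.0, 0.3), random.gauss(0.0, 0.3)), "kitchen")
        for _ in range(20)]
data += [((random.gauss(5.0, 0.3), random.gauss(5.0, 0.3)), "desk")
         for _ in range(20)]

K = 2  # assumed number of location concepts

def log_gauss(pos, mu, var=0.5):
    """Log density of an isotropic Gaussian over the 2-D position."""
    return sum(-((a - b) ** 2) / (2 * var) - 0.5 * math.log(2 * math.pi * var)
               for a, b in zip(pos, mu))

# Initialise concept centres from two distant observations.
mus = [data[0][0], data[-1][0]]
word_probs = [{} for _ in range(K)]

for _ in range(10):  # hard-assignment EM iterations
    clusters = [[] for _ in range(K)]
    for pos, word in data:
        # E-step: assign each observation to the concept that best explains
        # both its position and its word (unseen words get a flat 0.5).
        k = max(range(K),
                key=lambda j: log_gauss(pos, mus[j])
                + math.log(word_probs[j].get(word, 0.5)))
        clusters[k].append((pos, word))
    for k in range(K):
        if not clusters[k]:
            continue
        # M-step: update the spatial mean and the smoothed word distribution.
        pts = [p for p, _ in clusters[k]]
        mus[k] = tuple(sum(c) / len(pts) for c in zip(*pts))
        counts = {}
        for _, w in clusters[k]:
            counts[w] = counts.get(w, 0) + 1
        total = sum(counts.values())
        word_probs[k] = {w: (c + 1) / (total + 2) for w, c in counts.items()}

# Each learned concept now couples a spatial region with a word distribution,
# so a word such as "kitchen" is grounded to a region on the map.
kitchen = max(range(K), key=lambda j: word_probs[j].get("kitchen", 0.0))
```

The same coupling of a continuous modality (position) with a discrete one (speech) is what a full Bayesian treatment would estimate with proper priors and inference such as Gibbs sampling; this sketch only shows the shape of the problem.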
Free Research Field | Intelligent robotics