Platform for concept formation based on multimodal information using a cloud VR system
Project/Area Number | 16K16133
Research Category | Grant-in-Aid for Young Scientists (B)
Allocation Type | Multi-year Fund
Research Field | Intelligent robotics
Research Institution | Ritsumeikan University
Principal Investigator |
Research Collaborator | TANIGUCHI Tadahiro (Ritsumeikan University, College of Information Science and Engineering, Professor)
Project Period (FY) | 2016-04-01 – 2018-03-31
Project Status | Completed (Fiscal Year 2017)
Budget Amount | ¥3,900,000 (Direct Cost: ¥3,000,000, Indirect Cost: ¥900,000)
Fiscal Year 2017: ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000)
Fiscal Year 2016: ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000)
Keywords | Multimodal / Concept / Cyber-physical / Bayesian model / Human-robot interaction / Concept acquisition / Symbol emergence / Transfer learning / Cognitive model / Virtual space / Unsupervised learning / Intelligent robot / Probabilistic model / Bayes / Concept formation / Virtual environment / Intelligent robotics
Outline of Final Research Achievements |
In this research, we constructed SIGVerse+L, a cloud-based virtual environment that enables the learning of object and location concepts from multimodal information gathered through large-scale interaction between people and robots. First, we built functions that reflect a subject's voice, gaze, and body motion on an avatar in the virtual environment. Next, using ROS, we implemented functions by which a robot in the virtual environment collects multimodal information such as the subject's speech, images of the environment, and positions on the map. Finally, we achieved location-concept learning from human-robot interaction in SIGVerse+L with a Bayesian model based on the collected multimodal information.
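To make the data-collection step concrete, the following is a minimal sketch of a ROS node that pairs a subject's recognized utterances with the robot's estimated pose, in the spirit of the collection function described above. The topic names (/speech_text, /camera/rgb/image_raw) are assumptions for illustration, not the project's actual interfaces; /amcl_pose is the conventional topic of the standard ROS amcl localizer.

#!/usr/bin/env python
# Hypothetical sketch of the multimodal logger described in the outline:
# records a subject's speech (as recognized text), camera images, and the
# robot's estimated pose on the map. Topic names are assumptions.
import rospy
from std_msgs.msg import String
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseWithCovarianceStamped

class MultimodalLogger(object):
    def __init__(self):
        # Latest pose estimate from the amcl localizer.
        self.latest_pose = None
        rospy.Subscriber('/amcl_pose', PoseWithCovarianceStamped, self.on_pose)
        rospy.Subscriber('/camera/rgb/image_raw', Image, self.on_image)
        rospy.Subscriber('/speech_text', String, self.on_speech)

    def on_pose(self, msg):
        self.latest_pose = msg.pose.pose

    def on_image(self, msg):
        rospy.logdebug('image %dx%d at %s', msg.width, msg.height,
                       msg.header.stamp)

    def on_speech(self, msg):
        # Pair each utterance with the pose at which it was heard; these
        # (utterance, position) pairs are the training data for the
        # location-concept model sketched below.
        if self.latest_pose is not None:
            p = self.latest_pose.position
            rospy.loginfo('heard "%s" at (%.2f, %.2f)', msg.data, p.x, p.y)

if __name__ == '__main__':
    rospy.init_node('multimodal_logger')
    MultimodalLogger()
    rospy.spin()

The Bayesian learning step can likewise be illustrated with a deliberately simplified stand-in: positions are clustered with a Gaussian mixture, and each cluster keeps a Dirichlet-smoothed word distribution estimated from the utterances heard there. The project's actual model is a richer multimodal Bayesian model (with the number of concepts not fixed in advance); this sketch, with toy data and a fixed K, only shows the coupling of position and word modalities.

# Minimal illustrative sketch, not the project's model: Gaussian mixture
# over positions plus per-cluster Dirichlet-smoothed word distributions.
import numpy as np
from sklearn.mixture import GaussianMixture

# (x, y) positions and the words uttered there (toy data).
positions = np.array([[0.1, 0.2], [0.3, 0.1], [5.0, 5.2], [5.1, 4.9]])
utterances = [['kitchen'], ['kitchen', 'table'], ['bedroom'], ['bed', 'bedroom']]

K = 2  # number of location concepts (fixed here for simplicity)
gmm = GaussianMixture(n_components=K, random_state=0).fit(positions)
z = gmm.predict(positions)  # concept assignment per observation

# Per-concept word counts with a symmetric Dirichlet prior (alpha = 0.1).
vocab = sorted({w for ws in utterances for w in ws})
alpha = 0.1
counts = np.full((K, len(vocab)), alpha)
for k, words in zip(z, utterances):
    for w in words:
        counts[k, vocab.index(w)] += 1
word_dist = counts / counts.sum(axis=1, keepdims=True)

# Name the concept that a new position most likely belongs to.
k_new = gmm.predict(np.array([[0.2, 0.15]]))[0]
print('most probable word:', vocab[int(word_dist[k_new].argmax())])

Run on the toy data, the second sketch assigns the two clusters of positions to separate concepts and reports "kitchen" for a query near the first cluster, which is the position-to-word inference the outline attributes to the learned model.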
Report (3 results)
Research Products (20 results)