Automatic description of human actions in the real world
Project/Area Number | 26540133 |
Research Category | Grant-in-Aid for Challenging Exploratory Research |
Allocation Type | Multi-year Fund |
Research Field | Intelligent robotics |
Research Institution | The University of Tokyo |
Principal Investigator | Takano Wataru, The University of Tokyo, Graduate School of Information Science and Technology, Associate Professor (30512090) |
Project Period (FY) | 2014-04-01 – 2017-03-31 |
Project Status | Completed (Fiscal Year 2016) |
Budget Amount | ¥3,640,000 (Direct Cost: ¥2,800,000, Indirect Cost: ¥840,000) |
Fiscal Year 2016: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2015: ¥1,300,000 (Direct Cost: ¥1,000,000, Indirect Cost: ¥300,000)
Fiscal Year 2014: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Keywords | action recognition / machine learning / statistical modeling / body motion / natural language / statistical model / motion recognition |
Outline of Final Research Achievements |
Artificial intelligence is expected to understand the real world symbolically. In this research, we focused on human actions and developed a fundamental framework for converting actions into relevant descriptions. The framework encodes human whole-body motions and the objects being manipulated into symbols, and then establishes connections between these symbols and descriptions. These connections allow sentences describing the human actions to be generated. In particular, extracting context from sequences of actions and sentences makes it possible to select words appropriate to the current situation, which yields correct descriptions of the human actions.
|
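The pipeline described above (motion encoding to symbols, symbol-to-word connections, context-aware word selection) can be illustrated with a minimal sketch. This is a hypothetical toy illustration, not the authors' implementation: the symbol vocabulary, the `encode_motion` prototype matching, and the `describe` sentence template are all invented stand-ins for the statistical models used in the actual research.

```python
# Hypothetical sketch of the described pipeline: encode a motion into a
# symbol, then map the symbol plus recent context to words for a sentence.

# Invented motion-symbol vocabulary: each symbol links to candidate verbs.
SYMBOL_TO_VERBS = {
    "reach": ["reach", "extend"],
    "grasp": ["grasp", "hold"],
    "lift":  ["lift", "raise"],
}

def encode_motion(feature_sequence):
    """Toy stand-in for a statistical motion encoder: pick the symbol
    whose scalar prototype is closest to the sequence mean."""
    prototypes = {"reach": 0.2, "grasp": 0.5, "lift": 0.8}
    mean = sum(feature_sequence) / len(feature_sequence)
    return min(prototypes, key=lambda s: abs(prototypes[s] - mean))

def describe(symbol, manipulated_object, context):
    """Build a subject-verb-object description, preferring a verb that is
    consistent with the surrounding context (context-aware word selection)."""
    verbs = SYMBOL_TO_VERBS[symbol]
    verb = next((v for v in verbs if v in context), verbs[0])
    return f"A person {verb}s the {manipulated_object}."

seq = [0.75, 0.8, 0.85]          # toy whole-body motion features
sym = encode_motion(seq)          # -> "lift"
print(describe(sym, "cup", context=["raise"]))  # "A person raises the cup."
```

With an empty context the first candidate verb is chosen, so `describe("grasp", "box", context=[])` yields "A person grasps the box."; supplying context words steers the selection, mirroring how extracted context disambiguates word choice in the described framework.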
Report (4 results)
Research Products (7 results)