Analysis and modeling of nonverbal behaviors in explanation using time-series multimodal analysis
Project/Area Number | 25730132
Research Category | Grant-in-Aid for Young Scientists (B)
Allocation Type | Multi-year Fund
Research Field | Intelligent informatics
Research Institution | Tokyo Institute of Technology
Principal Investigator | OKADA Shogo, Tokyo Institute of Technology, Interdisciplinary Graduate School of Science and Engineering, Assistant Professor (00512261)
Project Period (FY) | 2013-04-01 – 2015-03-31
Project Status | Completed (Fiscal Year 2014)
Budget Amount | ¥2,860,000 (Direct Cost: ¥2,200,000, Indirect Cost: ¥660,000)
Fiscal Year 2014: ¥650,000 (Direct Cost: ¥500,000, Indirect Cost: ¥150,000)
Fiscal Year 2013: ¥2,210,000 (Direct Cost: ¥1,700,000, Indirect Cost: ¥510,000)
Keywords | Social signal processing / Pattern recognition / Conversation analysis / Data mining / Multimodal interaction / Explanation skill evaluation / Machine learning / Conversational informatics
Outline of Final Research Achievements |
This research focuses on modeling the explanation performance of participants in group conversation. We present a multimodal analysis of explanation performance in group conversation as evaluated by external observers. A new multimodal data corpus, including the performance scores of participants, was collected through a group storytelling task. We extract multimodal features for explainers and listeners from manual transcriptions of spoken dialog and from various nonverbal patterns, including speaking turns, utterance prosody, head gestures, hand gestures, and the head direction of each participant. We also extract multimodal co-occurrence features, such as utterances accompanied by head gestures. In the experiment, we modeled the relationship between the performance indices and the features using machine learning techniques. Experimental results show that the highest accuracy, 82% for total explanation performance, was obtained with a combination of these features in a binary classification task.
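The outline above describes combining verbal, nonverbal, and co-occurrence feature sets to predict a binary (high/low) explanation-performance label. A minimal sketch of that setup is shown below; it uses synthetic placeholder data rather than the actual corpus, and an SVM pipeline is only one plausible choice, since the outline does not name the specific learning algorithm. All array names and feature dimensions here are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 60  # one row per explanation episode (synthetic stand-in for the corpus)

# Hypothetical feature groups mirroring the outline's modalities.
verbal = rng.normal(size=(n, 4))      # e.g. features from dialog transcriptions
nonverbal = rng.normal(size=(n, 6))   # e.g. turn, prosody, gesture, head-direction stats
cooccur = rng.normal(size=(n, 3))     # e.g. utterance-with-head-gesture rates

# Feature combination: concatenate all modalities into one vector per episode.
X = np.hstack([verbal, nonverbal, cooccur])

# Synthetic binary label standing in for observer-rated high/low performance.
y = (verbal[:, 0] + nonverbal[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Binary classification with cross-validated accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

Comparing such cross-validated accuracy across single-modality versus combined feature matrices is the usual way to test whether the combination helps, which is the kind of comparison the reported 82% result implies.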
Report (3 results)
Research Products (22 results)