Project/Area Number | 20700106 |
Research Category | Grant-in-Aid for Young Scientists (B) |
Allocation Type | Single-year Grants |
Research Field | Media Informatics/Database |
Research Institution | Advanced Telecommunications Research Institute International |
Principal Investigator | YONEZAWA Tomoko, Advanced Telecommunications Research Institute International, Intelligent Robotics Laboratory, Researcher (90395161) |
Project Period (FY) | 2008 – 2010 |
Project Status | Completed (Fiscal Year 2010) |
Budget Amount | ¥4,290,000 (Direct Cost: ¥3,300,000, Indirect Cost: ¥990,000) |
Fiscal Year 2010: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2009: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2008: ¥1,950,000 (Direct Cost: ¥1,500,000, Indirect Cost: ¥450,000)
Keywords | expressions of anthropomorphic media / multimodal expressions of daily greetings / gender differences / age differences / individual differences / intra-individual expression differences / inter-individual expression differences / expression morphing / individual differences by gender and age / intra- and inter-individual expression differences / midpoint of expressions / intention-to-talk behavior / multimodal expressions of Japanese greetings / nonlinear perception of vocal expressions |
Research Abstract |
Robots and virtual agents each have their own (given) character per product, based on an original multimodal data set of voices and motions. However, these given characters sometimes interfere with the varied expressions needed for communicative purposes. In this research project, we proposed a multimodal expression capturing method for individualizing anthropomorphic embodied media. Multimodal-FONT is the name of the database captured by this project. As the name suggests, individual multimodal expression data for daily greetings were captured across varied age groups and genders. In the end, we gathered the multimodal expressions of over 30 persons, simultaneously recording voice, head and upper-body gestures, and gaze direction. The recorded data were analyzed in terms of voice prosody, motion characteristics, gaze patterns, and the variation widths of the whole expressions. As a result, we confirmed several kinds of expression patterns that are either relevant or irrelevant to the participants' gender or age groups. |