Development of integrated speech and image measurement interfaces for multimodal interaction
Project/Area Number | 24560523
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Multi-year Fund
Section | General (一般)
Research Field | Measurement engineering
Research Institution | Kumamoto University
Principal Investigator | OGATA Kohichi (Kumamoto University, Graduate School of Science and Technology, Associate Professor) (10264277)
Project Period (FY) | 2012-04-01 – 2015-03-31
Project Status | Completed (Fiscal Year 2014)
Budget Amount | ¥5,460,000 (Direct Cost: ¥4,200,000, Indirect Cost: ¥1,260,000)
Fiscal Year 2014: ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000)
Fiscal Year 2013: ¥1,820,000 (Direct Cost: ¥1,400,000, Indirect Cost: ¥420,000)
Fiscal Year 2012: ¥2,080,000 (Direct Cost: ¥1,600,000, Indirect Cost: ¥480,000)
Keywords | Signal processing / Speech synthesis / Measurement / Interface
Outline of Final Research Achievements | The aim of this study was to develop synthesis and measurement interfaces for multimodal interaction. Speech synthesis techniques based on the speech production process were combined with image processing techniques such as eye-gaze estimation, eye-gaze-driven control, and motion capture to create various interfaces that support multimodal interaction. For example, the study revealed that a vocal-tract mapping interface can serve as a useful measurement tool for judging the dexterity of the finger-nose movement. An attempt to estimate the vocal tract shape from vowel sounds via the vocal-tract mapping interface yielded successful estimation of the vocal tract shape together with visual trajectories on the mapping interface window. Moreover, an eye-gaze interface enabled the motion of a radio-controlled car to be controlled by eye gaze.
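One technical thread in the summary above is the estimation of vocal tract shape from vowel sounds and the display of trajectories on the mapping interface window. The sketch below is a minimal, illustrative approximation of such a pipeline (not the project's actual method): LPC-based formant estimation of a vowel frame followed by a linear mapping of (F1, F2) onto a 2D window. The librosa dependency, the file name vowel.wav, the formant ranges, and the window dimensions are all assumptions made for illustration.

# Minimal sketch, assuming LPC formant estimation and a linear (F1, F2) -> window mapping.
import numpy as np
import librosa

def estimate_f1_f2(frame, sr, order=12):
    """Estimate the first two formants (Hz) of a voiced vowel frame via LPC."""
    # Pre-emphasis and Hamming window before LPC analysis
    emphasized = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])
    windowed = emphasized * np.hamming(len(emphasized))
    a = librosa.lpc(windowed, order=order)
    # Formant candidates are the angles of LPC polynomial roots in the upper half-plane
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]
    freqs = np.sort(np.angle(roots) * sr / (2.0 * np.pi))
    candidates = [f for f in freqs if f > 90.0]   # discard near-DC roots
    return candidates[0], candidates[1]           # F1, F2

def formants_to_window(f1, f2, width=400, height=400):
    """Map (F1, F2) to pixel coordinates of a mapping-interface window.

    Assumed ranges: F1 in 200-900 Hz (vertical axis), F2 in 800-2500 Hz (horizontal axis).
    """
    x = (f2 - 800.0) / (2500.0 - 800.0) * width
    y = (f1 - 200.0) / (900.0 - 200.0) * height
    return float(np.clip(x, 0, width)), float(np.clip(y, 0, height))

# Usage: trace a frame-by-frame trajectory for a vowel recording
y, sr = librosa.load("vowel.wav", sr=16000)       # hypothetical input file
frame_len, hop = 512, 256
trajectory = []
for start in range(0, len(y) - frame_len, hop):
    f1, f2 = estimate_f1_f2(y[start:start + frame_len], sr)
    trajectory.append(formants_to_window(f1, f2))

In the actual interface, the mapping would go in the other direction as well, from a window position to a vocal tract area function used for synthesis; the linear ranges used here are only a stand-in for that calibrated mapping.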
Report (4 results)
Research Products (26 results)