Project/Area Number | 20700193 |
Research Category | Grant-in-Aid for Young Scientists (B) |
Allocation Type | Single-year Grants |
Research Field | Sensitivity informatics/Soft computing |
Research Institution | Tohoku University |
Principal Investigator | SAKAMOTO Shuichi, Tohoku University, Research Institute of Electrical Communication, Assistant Professor (60332524) |
Project Period (FY) | 2008 – 2009 |
Project Status | Completed (Fiscal Year 2009) |
Budget Amount | ¥4,290,000 (Direct Cost: ¥3,300,000, Indirect Cost: ¥990,000)
Fiscal Year 2009: ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000)
Fiscal Year 2008: ¥1,950,000 (Direct Cost: ¥1,500,000, Indirect Cost: ¥450,000) |
Keywords | Kansei interface / Multimodal information processing / Audio-visual integration / Universal design / Kansei information processing / Speech perception / Lip reading / Word intelligibility |
Research Abstract | The aim of this study is to develop a next-generation audio-visual communication system. For this purpose, I investigated how people integrate speech sounds and moving images of a talker's face, not only to understand the speech signal but also to perceive affective and emotional (Kansei) information. I focused on the effect of speech rate on audio-visual speech understanding, because speaking slowly is a very effective way to talk to older adults, especially hearing-impaired listeners. I used a speech-rate conversion technique to slow down the speech rate, and the synthesized signal was combined with the original movie. The experimental results suggested that older adults are tolerant of audio-visual asynchrony. This finding implies the possibility of a new audio-visual communication system that enhances speech understanding by slowing down the auditory information. |
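The abstract describes slowing the speech rate of the audio track while pairing it with the unmodified facial video. The following is a minimal illustrative sketch of that idea only; the study's actual speech-rate conversion method is not specified here, and the use of librosa, the phase-vocoder time stretch, the stretch factor, and the file names are all assumptions for demonstration.

```python
# Illustrative sketch: slow down a speech recording so it can be presented
# alongside the original (unmodified) video of the talker's face.
# NOTE: librosa's generic time stretch is an assumption, not the conversion
# technique used in the study; file names and the rate are hypothetical.
import librosa
import soundfile as sf

# Load the original speech at its native sampling rate.
speech, sr = librosa.load("talker_speech.wav", sr=None)

# rate < 1.0 lengthens the signal (e.g., 0.7 -> roughly 1.4x longer),
# which introduces a controlled asynchrony against the original video.
slowed = librosa.effects.time_stretch(speech, rate=0.7)

# Save the slowed speech; it would then be muxed with the original movie.
sf.write("talker_speech_slow.wav", slowed, sr)
```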