Investigation of audio-visual integration of speech sounds
Project/Area Number | 26750215
Research Category | Grant-in-Aid for Young Scientists (B)
Allocation Type | Multi-year Fund
Research Field | Rehabilitation science/Welfare engineering
Research Institution | National Rehabilitation Center for Persons with Disabilities
Principal Investigator | Tomomi Mizuochi-Endo, National Rehabilitation Center for Persons with Disabilities (Research Institute), Department of Rehabilitation for Brain Functions, Research Fellow (RPD) (90568859)
Project Period (FY) | 2014-04-01 – 2017-03-31
Project Status | Completed (Fiscal Year 2016)
Budget Amount | ¥2,470,000 (Direct Cost: ¥1,900,000, Indirect Cost: ¥570,000)
Fiscal Year 2016: ¥650,000 (Direct Cost: ¥500,000, Indirect Cost: ¥150,000)
Fiscal Year 2015: ¥780,000 (Direct Cost: ¥600,000, Indirect Cost: ¥180,000)
Fiscal Year 2014: ¥1,040,000 (Direct Cost: ¥800,000, Indirect Cost: ¥240,000)
Keywords | audio-visual integration / speech / EEG / intracranial EEG / vowel / ECoG / audition / discrimination / sensory integration / brain damage
Outline of Final Research Achievements |
Visual speech cues are important for speech perception. In this study, we focused on formant frequency as the acoustic information that visual speech cues convey. To investigate how formant information from articulation is used, we recorded EEG and ECoG responses to synthesized vowels that differed only in formant frequency. In EEG from healthy adults, there were no significant differences among vowels. In contrast, ECoG channels over the left posterior superior temporal gyrus showed significant differences among vowels within 200 ms after sound onset in the audio-only condition. These differences disappeared in the audio-visual condition, indicating that visual information may alter the neural response to sound in the vicinity of the auditory cortex.
Report | 4 results
Research Products | 2 results