Recording of Brain Activity for Speech Processing Using Audio-Visual Stimuli
Project/Area Number | 07557108 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | 試験 (Trial) |
Research Field | Otorhinolaryngology |
Research Institution | UNIVERSITY OF TOKYO |
Principal Investigator | KIRITANI Shigeru, UNIVERSITY OF TOKYO, Graduate School of Medicine, Professor (90010032) |
Co-Investigator (Kenkyū-buntansha) |
UMOTO Masato, UNIVERSITY OF TOKYO, Graduate School of Medicine (University Hospital, Faculty of Medicine), Research Associate (30240170)
MORI Koichi, UNIVERSITY OF TOKYO, Graduate School of Medicine, Research Associate (60157857)
IMAIZUMI Satoshi, UNIVERSITY OF TOKYO, Graduate School of Medicine, Associate Professor (80122018)
NIIMI Seiji, UNIVERSITY OF TOKYO, Graduate School of Medicine, Professor (00010273)
|
Project Period (FY) | 1995 – 1996 |
Project Status | Completed (Fiscal Year 1996) |
Budget Amount |
¥4,500,000 (Direct Cost: ¥4,500,000)
Fiscal Year 1996: ¥1,000,000 (Direct Cost: ¥1,000,000)
Fiscal Year 1995: ¥3,500,000 (Direct Cost: ¥3,500,000)
|
Keywords | Speech / MEG / ERP / Audio-Visual / Brain Activity / Evoked potentials / Audio-visual integration response / Semantic judgment task / Lexical judgment task / Phonological judgment task / McGurk effect |
Research Abstract |
The present study aims at establishing a system for recording brain activity (ERP or MEG) during the semantic processing of word meaning. Three hundred basic concrete nouns were selected, and for these words digitized speech data, image data of the written forms (kanji and hiragana), and picture data were prepared. A system was constructed for the simultaneous presentation of audio-visual stimuli with the required timing control. A set of dynamic video stimuli with artificial timing manipulation of the audio and video signals was also prepared to examine the temporal course of audio-visual integration. Using these stimuli, the following preliminary experiments were conducted to compare the visual and auditory processing of word meaning and to analyze the process of audio-visual integration.
1. Comparison of categorical judgment for character and picture stimuli. Words for animal names were presented as frequent standard stimuli and words for plant names as rare stimuli; the task was to count the plant names. ERPs for the picture stimuli showed a characteristic positive shift at about 400 msec after stimulus onset.
2. Comparison of audio-visual mismatch for character and picture stimuli. Audio and visual stimuli were presented simultaneously, with mismatched pairs presented infrequently. There was a significant difference in P300 latency between character and picture stimuli.
3. Analysis of audio-visual speech integration. Neural processes related to audio-visual cue binding were investigated by measuring the electromagnetic neural activity elicited by infrequent changes in the timing of audio-visual speech stimuli. The results suggest the existence of a mechanism that guarantees coherent phonetic categorization by binding parallel but spatio-temporally distributed processes in the auditory and visual cortices.
|
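As a rough illustration of the kind of oddball presentation described in the abstract, the sketch below generates a randomized trial list in which frequent standard words (animal names) are mixed with rare target words (plant names) and each trial is assigned an audio-visual onset asynchrony. This is a minimal sketch under assumed parameters: the word lists, deviant probability, and offset values are hypothetical placeholders, and the report does not specify the actual stimulus parameters or the presentation software used.

```python
"""Minimal sketch of an audio-visual oddball trial sequence.
All stimulus words, probabilities, and timing offsets are hypothetical;
the project report does not give these values."""
import random

# Hypothetical stimulus pools (the study used 300 basic concrete nouns).
STANDARD_WORDS = ["inu", "neko", "uma", "tori"]   # animal names: frequent standards
DEVIANT_WORDS = ["sakura", "matsu", "bara"]       # plant names: rare targets to be counted


def make_oddball_sequence(n_trials=100, p_deviant=0.15, av_offsets_ms=(0, 80, 160)):
    """Build a randomized trial list pairing each word with an
    audio-visual onset asynchrony (0 ms = synchronous presentation)."""
    trials = []
    for _ in range(n_trials):
        if random.random() < p_deviant:
            word, kind = random.choice(DEVIANT_WORDS), "deviant"
        else:
            word, kind = random.choice(STANDARD_WORDS), "standard"
        trials.append({
            "word": word,
            "type": kind,
            # Offset between the audio onset and the character/picture onset.
            "av_offset_ms": random.choice(av_offsets_ms),
        })
    return trials


if __name__ == "__main__":
    # Print a short example sequence to show the trial structure.
    for trial in make_oddball_sequence(n_trials=10):
        print(trial)
```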
Report (3 results)
Research Products (18 results)