2019 Fiscal Year Final Research Report
Advancement of language acquisition by predictive segmentation of speech
Project/Area Number | 16K12449 |
Research Category | Grant-in-Aid for Challenging Exploratory Research |
Allocation Type | Multi-year Fund |
Research Field | Cognitive science |
Research Institution | Tokyo Metropolitan University |
Principal Investigator | Homae Fumitaka, Tokyo Metropolitan University, Graduate School of Humanities, Associate Professor (20533417) |
Project Period (FY) | 2016-04-01 – 2020-03-31 |
Keywords | Cognitive neuroscience / Developmental brain science / Speech perception / Vocabulary / Eye tracking / EEG |
Outline of Final Research Achievements |
The purpose of this study was to examine how infants perceive speech in their native language and to test the hypothesis that they listen while predicting upcoming clause boundaries and sentence endings. We assessed the number of words each infant had acquired with a language development questionnaire, and simultaneously recorded gaze position and EEG while the infant watched a video of a speaker's face and listened to her voice. The results showed that infants aged 6 to 22 months tended to look at the speaker's mouth rather than her eyes, and that this tendency was related to age and to the number of words acquired. We also found differences in the EEG responses to native-language speech and to speech in which the characteristic features of the native language had been reduced. Based on these results, we discussed the role of speech perception in language acquisition.
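As a concrete illustration of how mouth-versus-eyes looking could be quantified from the gaze recordings described above, the following is a minimal sketch. It is not the authors' actual analysis code; the area-of-interest (AOI) boundaries, data format, and function names are illustrative assumptions.

```python
# Minimal sketch (assumed AOIs and data format, not the report's pipeline) of
# computing a mouth-over-eyes looking preference index from gaze samples.
import numpy as np

# Hypothetical screen AOIs in pixels: (x_min, x_max, y_min, y_max)
EYES_AOI = (400, 880, 200, 400)
MOUTH_AOI = (460, 820, 520, 700)


def in_aoi(x, y, aoi):
    """Boolean array marking gaze samples that fall inside an AOI."""
    x_min, x_max, y_min, y_max = aoi
    return (x >= x_min) & (x <= x_max) & (y >= y_min) & (y <= y_max)


def mouth_preference_index(gaze_x, gaze_y):
    """Proportion of mouth looking relative to mouth + eyes looking.

    Values above 0.5 indicate more time spent on the mouth than on the eyes.
    """
    gaze_x = np.asarray(gaze_x, dtype=float)
    gaze_y = np.asarray(gaze_y, dtype=float)
    mouth = np.sum(in_aoi(gaze_x, gaze_y, MOUTH_AOI))
    eyes = np.sum(in_aoi(gaze_x, gaze_y, EYES_AOI))
    total = mouth + eyes
    return mouth / total if total > 0 else np.nan


# Example with random gaze samples standing in for one infant's recording
rng = np.random.default_rng(0)
x = rng.uniform(0, 1280, size=5000)
y = rng.uniform(0, 800, size=5000)
print(f"mouth preference index: {mouth_preference_index(x, y):.2f}")
```

Such an index, computed per infant, could then be related to age and to questionnaire-based vocabulary size, in the spirit of the relationship reported above.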
Free Research Field | Developmental brain science |
Academic Significance and Societal Importance of the Research Achievements |
This study showed for the first time that, over the period leading up to two years of age, when vocabulary grows rapidly, infants' tendency to look at the mouth rather than the eyes while listening to native-language speech continues to increase. We also found that, across the wide age range of 6 to 22 months, pupil diameter was larger when speech was played in temporal reverse than when it was played forward, and that differences in the EEG between forward and reversed playback emerged from about 2000 ms after speech onset. These results suggest that integrating auditory information about the suprasegmental features of native-language speech with visual information about mouth movements facilitates word acquisition.
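The forward-versus-reversed pupil comparison described above amounts to a paired, within-infant contrast of condition means. The sketch below illustrates one way such a contrast could be run; it is not the report's analysis pipeline, and all values are randomly generated placeholders rather than data from the study.

```python
# Illustrative paired comparison of mean pupil diameter between forward and
# time-reversed speech playback (assumed setup; no real data from the study).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_infants = 20

# Hypothetical per-infant mean pupil diameters (mm) for each playback condition
forward = rng.normal(loc=3.2, scale=0.2, size=n_infants)
reversed_ = forward + rng.normal(loc=0.1, scale=0.1, size=n_infants)  # simulated dilation

# Paired t-test: is pupil diameter larger during reversed playback?
t, p = stats.ttest_rel(reversed_, forward)
print(f"mean difference = {np.mean(reversed_ - forward):.2f} mm, t = {t:.2f}, p = {p:.3f}")
```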