Project/Area Number | 13680948 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Biomedical engineering/Biological material science |
Research Institution | Tokyo Metropolitan Institute of Technology |
Principal Investigator | SEKIHARA Kensuke, Tokyo Metropolitan Institute of Technology, Department of Engineering, Professor (40326020) |
Project Period (FY) | 2001 – 2003 |
Project Status | Completed (Fiscal Year 2003) |
Budget Amount |
¥2,700,000 (Direct Cost: ¥2,700,000)
Fiscal Year 2003: ¥1,000,000 (Direct Cost: ¥1,000,000)
Fiscal Year 2002: ¥1,000,000 (Direct Cost: ¥1,000,000)
Fiscal Year 2001: ¥700,000 (Direct Cost: ¥700,000)
|
Keywords | Magnetoencephalography / Source reconstruction / Adaptive beamformer / Speech sound / Auditory-visual integration / Noninvasive measurement of brain activity / Auditory and visual cortices / Spatial filter / Beamformer / Language processing in the brain / Auditory cortex / MEG inverse problem / Adaptive filter / Mental lexicon |
Research Abstract |
We have proposed a novel algorithm for reconstructing spatio-temporal cortical activities from magnetoencephalographic (MEG) measurements. The algorithm is based on adaptive beamforming, developed in the field of array signal processing (radar, sonar, and seismic exploration), which we have extended to incorporate the nature of electromagnetic sources. We have analyzed the factors that determine the quality of the final source reconstruction, including the effects of background brain activity, spatial resolution, and environmental noise. We have proposed a novel method for determining the source orientation, in which the orientation is chosen as the one giving the maximum SNR. We have also explored a method for evaluating the statistical significance of the reconstructed results; the method uses non-parametric statistics and therefore does not rely on a Gaussianity assumption for the background activity. The computer algorithm we developed is planned to be released publicly in the near future. We have applied the proposed algorithm to neuromagnetic recordings elicited by speech sounds. These experiments were also related to the well-known McGurk effect and to auditory-visual integration. The developed algorithm can reveal where auditory-visual integration takes place and how spoken language is processed in the brain. The results of these experiments will be published in the future.
|
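The abstract describes a minimum-variance (adaptive) beamformer in which the source orientation at each candidate location is chosen to maximize the output SNR. The sketch below illustrates that idea for a single location using NumPy/SciPy; the function name, regularization scheme, and white-noise model are illustrative assumptions, not the project's published implementation.

```python
import numpy as np
from scipy.linalg import eigh

def adaptive_beamformer_scan(B, L, noise_var=1.0, reg=1e-3):
    """Illustrative minimum-variance beamformer for one candidate source location.

    B : (n_sensors, n_samples) MEG measurements
    L : (n_sensors, 3) lead field of the candidate location
        (one column per orthogonal source orientation)
    noise_var : assumed sensor noise variance (hypothetical white-noise model)
    """
    n_sensors = B.shape[0]
    # Sample covariance of the measurements, lightly regularized (assumed scheme).
    R = np.cov(B)
    R += reg * (np.trace(R) / n_sensors) * np.eye(n_sensors)
    Rinv = np.linalg.inv(R)

    # Orientation maximizing the output SNR: generalized eigenvector of
    # (L^T R^-1 L, L^T R^-2 L) with the largest eigenvalue.
    A = L.T @ Rinv @ L
    C = L.T @ Rinv @ Rinv @ L
    _, vecs = eigh(A, C)                  # eigenvalues returned in ascending order
    eta = vecs[:, -1] / np.linalg.norm(vecs[:, -1])

    l = L @ eta                           # scalar lead field for that orientation
    w = Rinv @ l / (l @ Rinv @ l)         # unit-gain minimum-variance weights
    power = 1.0 / (l @ Rinv @ l)          # reconstructed source power
    snr = (l @ Rinv @ l) / (noise_var * (l @ Rinv @ Rinv @ l))
    timecourse = w @ B                    # reconstructed source time course
    return power, snr, timecourse
```

Scanning such a function over a grid of candidate locations, each with its own lead field, yields a spatio-temporal source map of the kind the abstract refers to.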