2003 Fiscal Year Final Research Report Summary
Spatio-temporal reconstruction of brain activity and its application to measure language-related brain activities
Project/Area Number | 13680948 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Biomedical engineering/Biological material science |
Research Institution | Tokyo Metropolitan Institute of Technology |
Principal Investigator | SEKIHARA Kensuke, Tokyo Metropolitan Institute of Technology, Department of Engineering, Professor (40326020) |
Project Period (FY) | 2001 – 2003 |
Keywords | Magnetoencephalography / Source reconstruction / Adaptive beamformer / Speech sound / Auditory-visual integration |
Research Abstract |
We have proposed a novel algorithm for reconstructing spatio-temporal cortical activities from magnetoencephalographic (MEG) measurements. The algorithm is based on adaptive beamforming, a technique developed in array signal processing for radar, sonar, and seismic exploration, which we have extended to incorporate the nature of electromagnetic sources. We have analyzed the factors that determine the quality of the final source reconstruction, including the effects of background brain activities, spatial resolution, and environmental noise. We have proposed a novel method for determining the source orientation, in which the orientation is chosen as the one giving the maximum signal-to-noise ratio (SNR). We have also explored a method for evaluating the statistical significance of the reconstructed results; the method uses non-parametric statistics and does not rely on a Gaussianity assumption for the background activities. The computer algorithm we developed is planned to be released publicly in the near future. We have applied the proposed algorithm to speech-sound-elicited neuromagnetic recordings. These experiments were also related to the well-known McGurk effect and to auditory-visual integration. The developed algorithm can reveal where auditory-visual integration takes place and how spoken language is processed in the brain. The results of these experiments will be published in the future. (An illustrative sketch of the adaptive beamformer idea is given after this record.)
|
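
For readers unfamiliar with adaptive beamforming, the following is a minimal sketch of a scalar minimum-variance beamformer in which the source orientation is chosen to maximize the beamformer output SNR, the class of method summarized in the abstract. It is written in Python with NumPy; the function name, array shapes, lead-field model, and the assumed noise covariance are illustrative assumptions for this sketch, not the project's released implementation.

import numpy as np


def scalar_beamformer_weights(leadfield_xyz, data_cov, noise_cov):
    """Minimum-variance beamformer weights at one voxel, with the source
    orientation chosen to maximize the beamformer output SNR.

    leadfield_xyz : (n_sensors, 3) lead field for three orthogonal orientations.
    data_cov      : (n_sensors, n_sensors) measurement covariance.
    noise_cov     : (n_sensors, n_sensors) noise-only covariance (assumed known).
    """
    R_inv = np.linalg.inv(data_cov)
    # The output SNR as a function of orientation eta is a generalized
    # Rayleigh quotient (eta' A eta) / (eta' B eta); its maximizer is the
    # principal generalized eigenvector of the pair (A, B).
    A = leadfield_xyz.T @ R_inv @ leadfield_xyz
    B = leadfield_xyz.T @ R_inv @ noise_cov @ R_inv @ leadfield_xyz
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
    eta = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    eta /= np.linalg.norm(eta)

    # Scalar lead field for the chosen orientation, then the classical
    # minimum-variance weight  w = R^{-1} l / (l' R^{-1} l).
    l = leadfield_xyz @ eta
    w = (R_inv @ l) / (l @ R_inv @ l)
    return w, eta


# Toy usage: estimate the source time course at a single candidate voxel.
rng = np.random.default_rng(0)
n_sensors, n_samples = 64, 500
meg_data = rng.standard_normal((n_sensors, n_samples))  # stand-in recordings
L = rng.standard_normal((n_sensors, 3))                 # stand-in lead field
R = np.cov(meg_data)                                    # data covariance
C = np.eye(n_sensors)                                   # assumed noise covariance
w, eta = scalar_beamformer_weights(L, R, C)
source_estimate = w @ meg_data                          # (n_samples,) time course

Applying such weights voxel by voxel yields the spatio-temporal reconstruction described in the abstract; the statistical thresholding of the resulting maps, which the report states is non-parametric, is not shown here.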