
Audio-Visual interaction in speech perception: a PET study

Research Project

Project/Area Number 13671764
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Single-year Grants
Section General
Research Field Otorhinolaryngology
Research Institution Tohoku University

Principal Investigator

KAWASE Tetsuaki  Tohoku University, Graduate School of Medicine, Department of Otolaryngology, Associate Professor (50169728)

Co-Investigator (Kenkyū-buntansha) YAMAGUCHI Keiichiro  Tohoku University, Cyclotron and Radioisotope Center, Research Associate (40210356)
SUZUKI Yoiti  Tohoku University, Research Institute of Electrical Communication, Professor (20143034)
Project Period (FY) 2001 – 2002
Project Status Completed (Fiscal Year 2002)
Budget Amount
¥3,600,000 (Direct Cost: ¥3,600,000)
Fiscal Year 2002: ¥1,500,000 (Direct Cost: ¥1,500,000)
Fiscal Year 2001: ¥2,100,000 (Direct Cost: ¥2,100,000)
Keywords Multimodal processing / speech recognition / integration / McGurk effect / PET
Research Abstract

The human brain effectively integrates information from multiple sensory modalities when perceiving external signals. This multimodal processing allows faster and more accurate recognition of information than unimodal processing. In speech perception, it is well known that visual information from the speaker's face can be used effectively: seeing visual information (facial movements) consistent with the phonation can improve the perceptibility of speech heard against background noise. This audio-visual interaction can occur even when the presented visual cue is inconsistent with the auditory input; in the typical case, seeing an incongruent phonatory gesture phonetically modifies the auditory percept (the McGurk effect). In the present study, we used PET to identify brain activations during the presentation of inconsistent audio-visual phonetic signals.
Results and Discussion
1. Behavioral measures: All subjects answered /be/ when the acoustic /be/ was presented together with the visual /be/. On the other hand, nine of 12 subjects answered /de/ (auditory illusion) when the acoustic /be/ was presented with the visual /ge/ (inconsistent condition).
2. Brain activation
For the nine subjects in whom the auditory illusion was observed, PET analysis was conducted using voxel-based image subtraction. When the A-V stimulation condition /be/(A)-/ge/(V) was compared with the A-V stimulation condition /be/(A)-/be/(V), or with /be/-/VN/, activated areas were observed in the amygdala, the anterior part of the cingulate gyrus, and the caudate head (p < 0.005, uncorrected).
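By way of illustration only (the report does not include analysis code), the following minimal Python sketch shows the general idea behind a voxel-wise subtraction analysis of this kind: normalized images from two conditions are compared voxel by voxel across subjects and thresholded at p < 0.005 uncorrected. The array names, image dimensions, placeholder data, and the choice of a paired t-test are assumptions made for the sketch, not details taken from the study.

    # Illustrative sketch (not the authors' actual pipeline): voxel-wise
    # paired subtraction between two PET conditions, thresholded at
    # p < 0.005 (uncorrected). All names, shapes, and data are hypothetical.
    import numpy as np
    from scipy import stats

    # Hypothetical normalized PET images, shape (n_subjects, x, y, z),
    # for the incongruent (/be/ audio + /ge/ video) and congruent
    # (/be/ audio + /be/ video) conditions of the nine subjects.
    incongruent = np.random.rand(9, 64, 64, 32)   # placeholder data
    congruent = np.random.rand(9, 64, 64, 32)     # placeholder data

    # Voxel-wise paired t-test across subjects (incongruent minus congruent).
    t_map, p_map = stats.ttest_rel(incongruent, congruent, axis=0)

    # Keep only voxels where the incongruent condition shows greater activity
    # at the uncorrected threshold quoted above (one-sided p < 0.005).
    one_sided_p = p_map / 2.0
    activation_mask = (t_map > 0) & (one_sided_p < 0.005)

    print(f"{int(activation_mask.sum())} voxels exceed p < 0.005 (uncorrected)")

In practice such analyses are run with dedicated neuroimaging software after spatial normalization and smoothing; the sketch only conveys the subtraction-and-threshold logic described above.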
Sekiyama et al. (1994) indicated that many subjects (especially Japanese subjects) experienced a kind of unpleasant or strange feeling when they saw the inconsistent A-V combination in the McGurk stimulus. The activation of limbic-system regions observed during presentation of the inconsistent A-V combination (/be/-/ge/) might be related to this emotional reaction to such an unusual, rarely experienced stimulus.

Report (3 results)
  • 2002 Annual Research Report
  • Final Research Report Summary
  • 2001 Annual Research Report

Published: 2001-04-01   Modified: 2016-04-21  
