2006 Fiscal Year Final Research Report Summary
A study on interaction between production and perception in speech communication
Project/Area Number | 16300053
Research Category | Grant-in-Aid for Scientific Research (B)
Allocation Type | Single-year Grants
Section | General
Research Field | Perception information processing / Intelligent robotics
Research Institution | Japan Advanced Institute of Science and Technology
Principal Investigator | AKAGI Masato, JAIST, School of Information Science, Professor (20242571)
Co-Investigators (Kenkyū-buntansha) |
TOU Takeshi, JAIST, School of Information Science, Professor (80334796)
UNOKI Masashi, JAIST, School of Information Science, Associate Professor (00343187)
LU Xugang, JAIST, School of Information Science, Research Associate (20362022)
Project Period (FY) | 2004 – 2006
Keywords | speech perception / speech production / transformed auditory feedback (TAF) / formant / electromyographic (EMG) signal / face image / articulation / tongue movement
Research Abstract |
This study employed an auditory feedback paradigm with perturbed fed-back speech to investigate the interaction between speech perception and production, measuring simultaneous fluctuations of the speech production organs through electromyographic (EMG) signals, articulatory movements recorded with an electromagnetic articulographic (EMA) system, and spectral analyses. The Chinese vowel pair [i]-[y] and the Japanese vowel pairs [e]-[a], [e]-[i], and [e]-[u] were chosen as experimental materials. While the speaker sustained the first vowel of a pair, the fed-back sound was randomly changed from the first vowel to the second by manipulating the first three formants. Spectral analysis showed clear compensation in the first and second formants of the produced vowels. Analyses of the EMG and EMA signals likewise revealed muscle reactivation and tongue movements that compensated for the perturbations. The latency of the compensating response was about 150 ms to onset and about 290 ms to maximum compensation, measured from the start of the perturbation. These measurements suggest that in most cases the speaker compensates for the "error" caused by the auditory perturbation through real-time monitoring, and that auditory feedback operates concurrently with speech production.
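The core manipulation in this paradigm is shifting the formants of the fed-back speech while the speaker sustains a vowel. The sketch below is a rough, offline illustration of one common way to do this (not the authors' actual real-time TAF system): LPC analysis of a vowel frame, rotation of the pole angles associated with the lowest formants, and resynthesis of the residual through the modified filter. The function name, LPC order, and shift values are assumptions made for illustration.

```python
# Illustrative sketch of formant-shifting a vowel frame (assumed parameters,
# not the study's real-time transformed auditory feedback implementation).
import numpy as np
from scipy.signal import lfilter
import librosa  # used only for LPC coefficients


def shift_formants(frame, sr, shifts_hz, order=14):
    """Return `frame` with its lowest formants shifted by `shifts_hz` (Hz).

    shifts_hz: shifts applied to the pole frequencies in order of increasing
    frequency, e.g. [200.0, -150.0, 0.0] for F1, F2, F3.
    """
    a = librosa.lpc(frame, order=order)      # LPC coefficients (1, a1, ..., ap)
    residual = lfilter(a, [1.0], frame)      # inverse filtering -> excitation
    poles = np.roots(a)

    # Each formant corresponds to a conjugate pole pair; work on the upper half.
    upper = sorted((p for p in poles if p.imag > 0), key=np.angle)
    shifted = []
    for i, p in enumerate(upper):
        freq = np.angle(p) * sr / (2 * np.pi)
        if i < len(shifts_hz):
            freq += shifts_hz[i]             # perturb this formant frequency
        # Keep the pole magnitude (bandwidth), move only its frequency.
        shifted.append(np.abs(p) * np.exp(2j * np.pi * freq / sr))

    # Rebuild the all-pole filter from shifted pairs plus any real poles.
    # (In practice, poles modelling spectral tilt should be excluded first.)
    new_poles = shifted + [np.conj(p) for p in shifted]
    new_poles += [p for p in poles if p.imag == 0]
    a_new = np.real(np.poly(new_poles))
    return lfilter([1.0], a_new, residual)   # resynthesize the perturbed frame
```

In the reported experiments the perturbation was applied to the feedback channel in real time, switching the sustained vowel toward the second member of each pair; the sketch only processes a single frame offline, which is enough to show the formant manipulation itself.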
Research Products
(6 results)
[Journal Article]
Author(s) | Akagi, M., Dang, J., Lu, X., Uchiyamada, T.
Title | "Investigation of interaction between speech perception and production using auditory feedback" (2006)
Journal Title | J. Acoust. Soc. Am. 120(5)
Pages | Pt. 2, 3253
Description | From the "Summary of Final Research Report (English)"