Project/Area Number | 07680391 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Intelligent informatics |
Research Institution | Yamanashi University |
Principal Investigator |
SEKIGUCHI Yoshihiro, Yamanashi University, Faculty of Engineering, Department of Computer Science, Professor (70020493)
|
Co-Investigator (Kenkyū-buntansha) |
SUZUKI Yoshimi, Yamanashi University, Faculty of Engineering, Research Associate (20206551)
ARIIZUMI Hitoshi, Yamanashi University, Faculty of Engineering, Lecturer (80020436)
KARASAWA Hiroshi, Yamanashi University, Faculty of Engineering, Associate Professor (90177618)
SHIGENAGA Minoru, Chukyo University, School of Information Science, Professor (20020282)
|
Project Period (FY) | 1995 – 1997 |
Project Status | Completed (Fiscal Year 1997) |
Budget Amount | ¥2,400,000 (Direct Cost: ¥2,400,000)
Fiscal Year 1997: ¥600,000 (Direct Cost: ¥600,000)
Fiscal Year 1996: ¥800,000 (Direct Cost: ¥800,000)
Fiscal Year 1995: ¥1,000,000 (Direct Cost: ¥1,000,000)
|
Keywords | Suprasegmental features / Prosodic information / Context information / Associative information / Discourse segmentation / Speech dialogue system / Speech synthesis by rule / Emotive speech synthesis / Topic identification / Demonstratives / Response speech / Processing of demonstratives / Emotional speech / Fundamental frequency / Phrase boundary information / Emotional expression |
Research Abstract |
The use of suprasegmental features in a speech dialogue system was investigated. Suprasegmental features are parameters that span long stretches of speech, for example prosodic information, context information, associative information between words, and discourse segment information. The concrete results are as follows:
1. We developed a method that extracts an accurate fundamental frequency by repeated differentiation and integration of the speech wave.
2. We developed a method that extracts phrase boundaries in continuous speech using prosodic information.
3. We developed a method that judges the modifying relation between successively spoken phrases using prosodic information.
4. We developed a method for representing the associative information between words.
5. We developed a method that predicts following words using the associative information between words.
6. We developed a discourse segmentation method based on automatic keyword extraction.
7. We developed speech dialogue systems in which pronouns can be used, by exploiting context information.
8. We developed a method that synthesizes several kinds of emotive speech using prosodic information.
9. We developed three kinds of speech dialogue systems and evaluated them.
The results confirmed that suprasegmental features are useful for improving the performance of man-machine speech dialogue systems.
|
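The abstract's first result concerns extracting an accurate fundamental frequency (F0) from the speech wave. The project's own method used repeated differentiation and integration of the waveform, which is not reproduced here; as an illustrative stand-in, the following is a minimal sketch of a standard autocorrelation-based F0 estimator. The function name `estimate_f0` and its search range are our own assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_f0(signal, sample_rate, fmin=60.0, fmax=400.0):
    """Estimate fundamental frequency of a voiced frame via autocorrelation.

    The autocorrelation of a periodic signal peaks at lags that are
    multiples of the pitch period; we search for the strongest peak
    within the plausible pitch range [fmin, fmax].
    """
    signal = signal - np.mean(signal)          # remove DC offset
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min = int(sample_rate / fmax)           # shortest plausible period
    lag_max = int(sample_rate / fmin)           # longest plausible period
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sample_rate / lag

# Synthetic "voiced" frame: a 120 Hz tone at a 16 kHz sampling rate.
sr = 16000
t = np.arange(0, 0.1, 1.0 / sr)
tone = np.sin(2 * np.pi * 120.0 * t)
print(round(estimate_f0(tone, sr)))  # → 120
```

Real speech would additionally require framing, windowing, and a voiced/unvoiced decision; this sketch only shows the core period-finding step.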
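Result 6 mentions discourse segmentation by automatic keyword extraction. The abstract gives no algorithmic detail, so the following is only a minimal lexical-cohesion sketch in the same spirit (comparable to TextTiling-style methods, not the project's algorithm): a topic boundary is hypothesized wherever the word overlap between adjacent utterances drops. The function names, the example dialogue, and the threshold are all illustrative assumptions.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two word-count vectors (Counters)."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def segment(sentences, threshold=0.1):
    """Place a topic boundary before sentence i+1 when the lexical
    similarity between sentences i and i+1 falls below threshold."""
    boundaries = []
    for i in range(len(sentences) - 1):
        a = Counter(sentences[i].lower().split())
        b = Counter(sentences[i + 1].lower().split())
        if cosine(a, b) < threshold:
            boundaries.append(i + 1)
    return boundaries

# Toy dialogue: the topic shifts from weather to trains at sentence 2.
talk = [
    "the weather today is sunny and warm",
    "sunny weather is good for a walk",
    "the train to tokyo leaves at nine",
    "tokyo train tickets cost two thousand yen",
]
print(segment(talk))  # → [2]
```

A production system would weight words by informativeness (e.g. filtering function words and scoring keywords) rather than using raw counts, which is closer to the "automatic keyword extraction" the abstract describes.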