Project/Area Number |
25240026
|
Research Category |
Grant-in-Aid for Scientific Research (A)
|
Allocation Type | Single-year Grants |
Section | General |
Research Field |
Perceptual information processing
|
Research Institution | Japan Advanced Institute of Science and Technology |
Principal Investigator |
Akagi Masato Japan Advanced Institute of Science and Technology, Graduate School of Advanced Science and Technology, Professor (20242571)
|
Co-Investigator(Kenkyū-buntansha) |
田中 宏和 Japan Advanced Institute of Science and Technology, School of Information Science, Associate Professor (00332320)
鵜木 祐史 Japan Advanced Institute of Science and Technology, School of Information Science, Associate Professor (00343187)
末光 厚夫 Japan Advanced Institute of Science and Technology, School of Information Science, Assistant Professor (20422199)
宮内 良太 Japan Advanced Institute of Science and Technology, School of Information Science, Assistant Professor (30455852)
北村 達也 Konan University, Faculty of Intelligence and Informatics, Professor (60293594)
川本 真一 Gunma National College of Technology, Department of Electronic and Information Engineering, Lecturer (70418507)
齋藤 毅 Kanazawa University, Faculty of Electrical and Computer Engineering, Assistant Professor (70446962)
森川 大輔 Japan Advanced Institute of Science and Technology, School of Information Science, Assistant Professor (70709146)
Erickson Donna Kanazawa Medical University, Department of General Education, Part-time Lecturer (80331586)
党 建武 Japan Advanced Institute of Science and Technology, School of Information Science, Professor (80334796)
榊原 健一 Health Sciences University of Hokkaido, School of Psychological Science, Associate Professor (80396168)
|
Project Period (FY) |
2013-04-01 – 2017-03-31
|
Project Status |
Completed (Fiscal Year 2016)
|
Budget Amount |
¥47,060,000 (Direct Cost: ¥36,200,000, Indirect Cost: ¥10,860,000)
Fiscal Year 2016: ¥10,140,000 (Direct Cost: ¥7,800,000, Indirect Cost: ¥2,340,000)
Fiscal Year 2015: ¥11,570,000 (Direct Cost: ¥8,900,000, Indirect Cost: ¥2,670,000)
Fiscal Year 2014: ¥12,090,000 (Direct Cost: ¥9,300,000, Indirect Cost: ¥2,790,000)
Fiscal Year 2013: ¥13,260,000 (Direct Cost: ¥10,200,000, Indirect Cost: ¥3,060,000)
|
Keywords | Speech information processing / Speech synthesis / Speech perception / Speech production / Non-linguistic information |
Outline of Final Research Achievements |
Aiming to realize an expressive speech synthesis system, we built a story-teller system, that is, a computer that can play various roles, each with an appropriate voice quality. Existing methods such as HMM-based speech synthesis produce speech that depends on the large-scale speech database used for training, so synthesizing speech with the wide range of speaking styles and individualities needed to play various roles requires correspondingly large databases. To overcome this, we devised a model of human speech production (a voice production simulator) that faithfully reflects the human speech production mechanism. By elucidating how humans selectively produce various voice qualities and implementing this mechanism in the model, we applied it to the synthesis of individual voices.
|
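Illustrative note: the following is a minimal, hypothetical Python sketch of the source-filter idea underlying a voice production simulator, included only to clarify the contrast with database-dependent HMM synthesis. The function names, parameter values, and two-resonator design are assumptions for illustration and do not represent the project's actual simulator.

import numpy as np
from scipy.signal import lfilter

FS = 16000  # sampling rate [Hz] (assumed for this sketch)

def glottal_source(f0, duration, fs=FS):
    # Impulse-train approximation of the glottal (voice) source at fundamental frequency f0.
    n = int(duration * fs)
    src = np.zeros(n)
    src[::int(fs / f0)] = 1.0
    return src

def vocal_tract_filter(signal, formants, bandwidths, fs=FS):
    # Cascade of second-order resonators, one per formant, standing in for the vocal tract.
    out = signal
    for f, bw in zip(formants, bandwidths):
        r = np.exp(-np.pi * bw / fs)                # pole radius set by formant bandwidth
        theta = 2.0 * np.pi * f / fs                # pole angle set by formant frequency
        a = [1.0, -2.0 * r * np.cos(theta), r * r]  # resonator denominator coefficients
        out = lfilter([1.0 - r], a, out)
    return out

# Different voice qualities come from changing production parameters (F0, formants),
# not from retraining on a large speech database as in HMM-based synthesis.
voice_low = vocal_tract_filter(glottal_source(120, 0.5), formants=[700, 1200], bandwidths=[80, 100])
voice_high = vocal_tract_filter(glottal_source(220, 0.5), formants=[300, 2300], bandwidths=[60, 90])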