
Studies on analysis and synthesis of an atmosphere in communicative situations

Research Project

Project/Area Number 11480086
Research Category

Grant-in-Aid for Scientific Research (B)

Allocation Type Single-year Grants
Section General
Research Field Information systems (including information library science)
Research Institution Kyoto Institute of Technology

Principal Investigator

NIIMI Yasuhisa  Kyoto Institute of Technology, Faculty of Engineering and Design, Professor (00026030)

Co-Investigator (Kenkyū-buntansha) NISHIMOTO Takuya  Kyoto Institute of Technology, Faculty of Engineering and Design, Assistant (80283696)
ARAKI Masahiro  Kyoto Institute of Technology, Faculty of Engineering and Design, Associate Professor (50252490)
Project Period (FY) 1999 – 2001
Project Status Completed (Fiscal Year 2001)
Budget Amount
¥9,600,000 (Direct Cost: ¥9,600,000)
Fiscal Year 2001: ¥1,400,000 (Direct Cost: ¥1,400,000)
Fiscal Year 2000: ¥4,200,000 (Direct Cost: ¥4,200,000)
Fiscal Year 1999: ¥4,000,000 (Direct Cost: ¥4,000,000)
Keywords modeling of atmosphere / communicative situation / gesture agent / recognition of emotional speech / synthesis of emotional speech / anthropomorphic agent / asynchronous speech conference / atmosphere in speech / ニュー波 / gesture recognition
Research Abstract

An atmosphere is created in situations where multiple persons communicate with each other, and the atmosphere of such situations is quite important in activating a discussion or a meeting. What kind of atmosphere is created depends on (1) what is discussed, (2) the social positions of the participants, their roles in the discussion, and their personal characteristics, and (3) their facial expressions, tones of voice, and physical behaviors. In this project we focused mainly on the third of these factors and studied the following three points.
1. Analysis of a conversation atmosphere
Situations where two persons talked spontaneously were recorded with a video camera. The timing of gestures and the intervals of utterances were annotated for each person, and we analyzed what differences in these factors were observed between periods where the dialog was activated and periods where it was not. Utterances of the two persons overlapped each other more frequently in the former periods than in the latter, and during the overlaps the direction of gaze changed frequently in the former, while body movements were frequently observed in the latter.
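As an illustration of the kind of timing analysis used here, the sketch below computes the overlapping portions of two speakers' annotated utterance intervals; the annotation format (per-speaker lists of start and end times in seconds) and the example values are assumptions for illustration, not the project's actual data.

    # A minimal sketch of utterance-overlap analysis (assumed data format:
    # per-speaker lists of (start, end) utterance times in seconds).
    def overlap_intervals(speaker_a, speaker_b):
        """Return the intervals during which both speakers are talking at once."""
        overlaps = []
        for a_start, a_end in speaker_a:
            for b_start, b_end in speaker_b:
                start, end = max(a_start, b_start), min(a_end, b_end)
                if start < end:  # the two utterances overlap
                    overlaps.append((start, end))
        return overlaps

    # Hypothetical annotations for a short stretch of dialog.
    a = [(0.0, 2.1), (3.0, 4.5), (6.2, 7.0)]
    b = [(1.8, 3.2), (4.4, 5.0)]
    print(overlap_intervals(a, b))                         # [(1.8, 2.1), (3.0, 3.2), (4.4, 4.5)]
    print(sum(e - s for s, e in overlap_intervals(a, b)))  # total overlap time in seconds
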
2. Development of a gesture agent
Physical behaviors of subjects who were asked to make gestures expressing psychological states such as "good humor", "convinced", and "interested", as well as their opposites, were recorded with a simple motion-capture system. Analysis of these data showed that combinations of head, shoulder, and hand movements could distinguish among these psychological states. A graphical gesture agent was designed based on this result.
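The following sketch shows how such a rule might drive a gesture agent: coarse head, shoulder, and hand movement features are mapped to one of the psychological states named above. The feature set and the rule table are illustrative assumptions, not the combinations actually measured in the project.

    # A minimal sketch of mapping coarse movement features to a psychological state.
    # The features and the rules below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class GestureFeatures:
        head_moves: bool      # noticeable head movement
        shoulder_moves: bool  # noticeable shoulder movement
        hands_move: bool      # noticeable hand movement

    def classify_state(f: GestureFeatures) -> str:
        """Pick a psychological state from a (hypothetical) combination of movements."""
        if f.head_moves and f.hands_move:
            return "interested"
        if f.head_moves and not f.shoulder_moves:
            return "convinced"
        if f.hands_move and f.shoulder_moves:
            return "good humor"
        return "neutral"

    # The graphical agent would then play the animation associated with the chosen state.
    print(classify_state(GestureFeatures(head_moves=True, shoulder_moves=False, hands_move=False)))  # convinced
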
3. Recognition and synthesis of emotional speech
Emotion plays an important role in creating a conversation atmosphere. A prototype system was therefore constructed to recognize emotions such as anger, fear, joy, and sadness contained in speech, and a speech synthesis system was built that can produce emotional speech for three emotions: anger, joy, and sadness.
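As a rough illustration of prosody-based emotion recognition, the sketch below assigns an utterance to the emotion whose reference prosody is nearest to the input features; the features (mean F0, relative energy, speaking rate) and all numeric values are hypothetical and do not describe the prototype's actual method.

    # A rough sketch of nearest-reference emotion recognition from prosodic features.
    # All reference values below are made up for illustration.
    import math

    EMOTION_REFERENCES = {
        "anger":   (220.0, 1.4, 1.2),   # (mean F0 in Hz, relative energy, speaking rate)
        "fear":    (240.0, 1.1, 1.3),
        "joy":     (210.0, 1.2, 1.1),
        "sadness": (150.0, 0.8, 0.8),
    }

    def recognize_emotion(mean_f0, energy, rate):
        """Return the emotion whose reference prosody is closest to the input features."""
        return min(EMOTION_REFERENCES,
                   key=lambda e: math.dist((mean_f0, energy, rate), EMOTION_REFERENCES[e]))

    print(recognize_emotion(mean_f0=155.0, energy=0.85, rate=0.82))  # sadness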

Report (4 results)
  • 2001 Annual Research Report
  • 2001 Final Research Report Summary
  • 2000 Annual Research Report
  • 1999 Annual Research Report
  • Research Products (17 results)


Publications (17 results)

  • [Publications] T. Nishimoto: "An Asynchronous Virtual Meeting System for Bi-directional Speech Dialog", Proc. of European Conference on Speech Communication and Technology, 2464-2467 (1999)

    • Description
      From the Final Research Report Summary (Japanese)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] T. Nishimoto: "Design and Evaluation of the Asynchronous Virtual Meeting System AVM" (in Japanese), IEICE Trans. D-II, J83-D-II-11, 2490-2497 (2000)

    • Description
      From the Final Research Report Summary (Japanese)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] M. Kasamatsu: "Synthesis of Emotional Speech Using Adaptive Segments" (in Japanese), IEICE Technical Report, SP2000-165 (2002)

    • Description
      From the Final Research Report Summary (Japanese)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] Y. Niimi: "Synthesis of Emotional Speech Using Prosodically Balanced VCV Segments", Proc. of ESCA Workshop on Speech Synthesis (http://www.ssw4.org/proceedings.html) (2001)

    • Description
      From the Final Research Report Summary (Japanese)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] H. Itoh: "The Analysis of the Atmosphere in the Dialog" (in Japanese), IPSJ Technical Report, SLP 40-18 (2002)

    • Description
      From the Final Research Report Summary (Japanese)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] T. Nishimoto: "An Asynchronous Virtual Meeting System for Bi-directional Speech Dialog", Proc. of European Conf. on Speech Communication and Technology, 2464-2467 (1999)

    • Description
      From the Final Research Report Summary (English)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] T. Nishimoto: "Design and Evaluation of the Asynchronous Virtual Meeting System AVM" (in Japanese), IEICE Trans. D-II, J83-D-II-11, 2490-2497 (2000)

    • Description
      From the Final Research Report Summary (English)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] M. Kasamatsu: "Synthesizing an Emotional Voice Using a Prosodically Balanced VCV Database" (in Japanese), IEICE Technical Report, SP2000-165 (2001)

    • Description
      From the Final Research Report Summary (English)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] Y. Niimi: "Synthesis of Emotional Speech Using Prosodically Balanced VCV Segments", Proc. of ESCA Workshop on Speech Synthesis (http://www.ssw4.org/proceedings.html) (2001)

    • Description
      From the Final Research Report Summary (English)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] H. Itoh: "The Analysis of the Atmosphere in the Dialog" (in Japanese), IPSJ Technical Report, SLP 40-18 (2002)

    • Description
      From the Final Research Report Summary (English)
    • Related Report
      2001 Final Research Report Summary
  • [Publications] Y. Niimi: "Synthesis of Emotional Speech Using Prosodically Balanced VCV Segments", Proc. of ESCA Workshop on Speech Synthesis (http://www.ssw4.org/proceedings.html) (2001)

    • Related Report
      2001 Annual Research Report
  • [Publications] H. Itoh: "The Analysis of the Atmosphere in the Dialog" (in Japanese), IPSJ Technical Report, SLP 40-18 (2002)

    • Related Report
      2001 Annual Research Report
  • [Publications] T. Nishimoto: "Design and Evaluation of the Asynchronous Virtual Meeting System AVM" (in Japanese), IEICE Trans. D-II, J83-D-II, 2490-2497 (2000)

    • Related Report
      2000 Annual Research Report
  • [Publications] Y. Niimi: "A Task-Independent Dialogue Controller Based on the Extended Frame-Driven Method", Proc. of Int. Conf. on Spoken Language Processing, 114-117 (2000)

    • Related Report
      2000 Annual Research Report
  • [Publications] M. Kasamatsu: "Synthesis of Emotional Speech Using Adaptive Segments" (in Japanese), IEICE Technical Report (2001)

    • Related Report
      2000 Annual Research Report
  • [Publications] Y. Niimi: "Mathematical Analysis of Dialogue Strategies", Proc. of European Conf. on Speech Communication and Technology, 1403-1406 (1999)

    • Related Report
      1999 Annual Research Report
  • [Publications] T. Nishimoto: "An Asynchronous Virtual Meeting System for Bi-directional Speech Dialog", Proc. of European Conf. on Speech Communication and Technology, 2464-2467 (1999)

    • Related Report
      1999 Annual Research Report


Published: 1999-04-01   Modified: 2016-04-21  
