
1999 Fiscal Year Final Research Report Summary

Construction of Prototype Multi-modal Interface by Lifelike Agent

Research Project

Project/Area Number 09555119
Research Category Grant-in-Aid for Scientific Research (B)

Allocation Type Single-year Grants
Section Development Research
Research Field Information and Communication Engineering
Research Institution Seikei University

Principal Investigator

MORISHIMA Shigeo  Faculty of Engineering, Seikei University, Professor (10200411)

Co-Investigator (Kenkyū-buntansha) YAMADA Hiroshi  College of Humanities and Sciences, Nihon University, Associate Professor (80191328)
Project Period (FY) 1997 – 1999
Keywords Kansei Information / Face Image Processing / Cyberspace / Agent / Lip Synchronization / Wire Frame Model / Expression Synthesis / Expression Analysis
Research Abstract

A prototype communication system for face-to-face-style interaction among multiple clients was constructed. A lifelike agent appears in cyberspace and is driven interactively by the client on the other side. The system consists of one server and multiple clients, and each client has a camera and a microphone. Voice captured from the microphone is transmitted to the server frame by frame. The voice signal is then analyzed and converted into mouth-shape parameters by a neural network. These mouth-shape parameters are transmitted to each client over the network, and the expression and lip movement of the lifelike agent representing the remote client are controlled by them. The voice signal is also transmitted to each client and played back through the speaker in synchronization with the synthesized face image. A basic expression can be selected by pressing a function key, which changes the face of the lifelike agent displayed at the other clients. Two modes are provided: a walk-through mode and a fly-through mode. In the walk-through mode, each user can communicate with the others by eye contact and change the location and gaze direction of the agent. In the fly-through mode, the user acts as an observer of the communication.
Using this prototype system, an experiment in which three users communicated with one another was performed. The synthesis rate was 10 frames per second, and a natural communication environment was achieved.
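
The per-frame flow described above (capture a voice frame, analyze it, map the analysis to mouth-shape parameters with a neural network, and broadcast the parameters together with the voice to the clients) can be sketched roughly as follows. This is a minimal illustration only: the frame length, acoustic features, network size, parameter count, and all names are assumptions for clarity, not details taken from the report, and the network weights here are random placeholders rather than a trained model.

```python
# Minimal sketch of the per-frame voice-to-mouth-shape pipeline (all sizes assumed).
import numpy as np

FRAME_LEN = 1600        # assumed: 100 ms of 16 kHz audio per frame
N_FEATURES = 16         # assumed: dimensionality of the acoustic feature vector
N_MOUTH_PARAMS = 6      # assumed: number of mouth-shape parameters per frame


def extract_features(frame: np.ndarray) -> np.ndarray:
    """Stand-in acoustic analysis: log power in N_FEATURES frequency bands."""
    spectrum = np.abs(np.fft.rfft(frame))
    bands = np.array_split(spectrum, N_FEATURES)
    return np.log1p(np.array([b.mean() for b in bands]))


class MouthShapeNet:
    """Tiny MLP mapping acoustic features to mouth-shape parameters.
    Weights are random here; in the actual system the network is trained."""

    def __init__(self, rng: np.random.Generator):
        self.w1 = rng.normal(0.0, 0.1, (N_FEATURES, 32))
        self.w2 = rng.normal(0.0, 0.1, (32, N_MOUTH_PARAMS))

    def __call__(self, feats: np.ndarray) -> np.ndarray:
        hidden = np.tanh(feats @ self.w1)
        return 1.0 / (1.0 + np.exp(-(hidden @ self.w2)))  # parameters in [0, 1]


def server_step(frame: np.ndarray, net: MouthShapeNet) -> dict:
    """One server cycle: analyze a voice frame and build the message that is
    broadcast to every client (mouth-shape parameters plus the raw voice)."""
    params = net(extract_features(frame))
    return {"mouth_params": params.tolist(), "voice": frame}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    net = MouthShapeNet(rng)
    voice_frame = rng.normal(0.0, 0.01, FRAME_LEN)   # placeholder microphone input
    msg = server_step(voice_frame, net)
    print("mouth-shape parameters:", np.round(msg["mouth_params"], 3))
```

On the receiving side, each client would apply the mouth-shape parameters to the wire frame model of the agent's face and play back the accompanying voice frame through the speaker at the same moment, which is what keeps the lip movement synchronized with the audio.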

  • Research Products (8 results)


  • [Publications] S. Morishima, Y. Yagi: "Standard Tools for Face Recognition and Synthesis" Systems, Control and Information. 44(3). 119-126 (2000)

    • Description
      From the "Summary of the Research Results Report (Japanese)"
  • [Publications] S. Morishima, T. Yotsukura: "Face-to-face Communication Avatar Driven by Voice" Proceedings of IEEE ICIP. 27AS1-3 (1999)

    • Description
      From the "Summary of the Research Results Report (Japanese)"
  • [Publications] S. Morishima: "Real-time Voice Driven Facial Animation System" Proceedings of IEEE ICSMC. FP13-5 (1999)

    • Description
      From the "Summary of the Research Results Report (Japanese)"
  • [Publications] K. Terada, S. Morishima et al.: "Facial Expression Changes by Orthodontic Treatment Using Computer Graphics" 歯科審美. 12(1). 37-51 (1999)

    • Description
      From the "Summary of the Research Results Report (Japanese)"
  • [Publications] T. Yotsukura, E. Fujii, S. Morishima: "Construction of a Real-time Interaction System with Virtual Humans in Cyberspace" Transactions of Information Processing Society of Japan. 40(2). 677-686 (1999)

    • Description
      From the "Summary of the Research Results Report (Japanese)"
  • [Publications] S. Morishima, T. Yotsukura: "Face-to-Face Communication Avatar Driven by Voice" Proceedings of IEEE ICIP. 27AS1-3 (1999)

    • Description
      From the "Summary of the Research Results Report (English)"
  • [Publications] S. Morishima: "Real-time Voice Driven Facial Animation System" Proceedings of IEEE ICSMC. FP13-5 (1999)

    • Description
      From the "Summary of the Research Results Report (English)"
  • [Publications] T. Yotsukura, E. Fujii, S. Morishima: "Realtime Face-to-face Communication System in Cyberspace Using Voice Driven Avatar with Texture Mapped Face" Transactions of Information Processing Society of Japan. 40(2). 677-686 (1999)

    • Description
      From the "Summary of the Research Results Report (English)"


Published: 2002-03-26  
