Individualization of Multimodal Expression for Anthropomorphic and Embodied Media

Research Project

Project/Area Number: 20700106
Research Category

Grant-in-Aid for Young Scientists (B)

Allocation Type: Single-year Grants
Research Field: Media informatics/Database
Research Institution: Advanced Telecommunications Research Institute International

Principal Investigator

YONEZAWA Tomoko, Advanced Telecommunications Research Institute International, Intelligent Robotics and Communication Laboratories, Researcher (90395161)

Project Period (FY) 2008 – 2010
Project Status Completed (Fiscal Year 2010)
Budget Amount
¥4,290,000 (Direct Cost: ¥3,300,000, Indirect Cost: ¥990,000)
Fiscal Year 2010: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2009: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2008: ¥1,950,000 (Direct Cost: ¥1,500,000, Indirect Cost: ¥450,000)
Keywords: expression of anthropomorphic media / multimodal expression of daily greetings / gender differences / age differences / individual differences / intra-individual expression differences / inter-individual expression differences / expression morphing / individual differences by gender and age / intra- and inter-individual expression differences / midpoint of expressions / intention-to-talk behavior / multimodal expression of Japanese greetings / nonlinear perception of vocal expression
Research Abstract

Robots and virtual agents each have their own given character, realized per product through an original multimodal data set of voices and motions. However, the given character sometimes interferes with the varied expressions needed for communicative purposes. In this research project, we proposed a method of capturing multimodal expressions for the individualization of anthropomorphic embodied media.
Multimodal-FONT is the name of the database captured in this project. As the name expresses, individual multimodal expressions of daily greetings were captured from participants of varied age groups and genders. In the end, we gathered the multimodal expressions of more than 30 persons, recording voice, head and upper-body gestures, and gaze direction simultaneously. The recorded data were analyzed in terms of voice prosody, motion characteristics, gaze patterns, and the variation width of the whole expression. As a result, we confirmed that some expression patterns are relevant to the participants' gender or age group, while others are not.
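
As a rough illustration of the kind of per-speaker analysis described above, the following is a minimal sketch under assumed conditions: the feature names (mean F0, gesture amplitude, gaze ratio), the sample records, and the group labels are illustrative placeholders, not the project's actual features, data, or pipeline.

```python
# Minimal sketch (hypothetical data layout, not the project's actual pipeline):
# summarize per-speaker multimodal features and compare their means across
# gender and age groups, in the spirit of the analysis described above.
from collections import defaultdict
from statistics import mean

# Hypothetical per-speaker records: mean F0 of the greeting voice (Hz),
# head-gesture amplitude (deg), and fraction of time gazing at the partner.
speakers = [
    {"gender": "F", "age_group": "20s", "mean_f0": 210.0, "gesture_amp": 14.2, "gaze_ratio": 0.61},
    {"gender": "F", "age_group": "50s", "mean_f0": 195.5, "gesture_amp": 10.8, "gaze_ratio": 0.55},
    {"gender": "M", "age_group": "20s", "mean_f0": 118.3, "gesture_amp": 12.1, "gaze_ratio": 0.48},
    {"gender": "M", "age_group": "50s", "mean_f0": 110.9, "gesture_amp": 9.4, "gaze_ratio": 0.52},
]

def group_means(records, key, feature):
    """Average one feature over all speakers sharing the same group label."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r[feature])
    return {label: mean(values) for label, values in groups.items()}

# Compare each feature by gender and by age group; large gaps between
# group means would suggest a group-relevant expression pattern.
for feature in ("mean_f0", "gesture_amp", "gaze_ratio"):
    by_gender = group_means(speakers, "gender", feature)
    by_age = group_means(speakers, "age_group", feature)
    print(f"{feature}: by gender {by_gender}, by age group {by_age}")
```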

Report

(4 results)
  • 2010 Annual Research Report
  • Final Research Report (PDF)
  • 2009 Annual Research Report
  • 2008 Annual Research Report
  • Research Products

    (12 results)


Journal Article (2 results, of which peer reviewed: 2) / Presentation (10 results)

  • [Journal Article] Anthropomorphic awareness of partner robot to user's situation based on gaze and speech detection

    • Author(s)
      Tomoko Yonezawa, Hirotake Yamazoe, Akira Utsumi, Shinji Abe
    • Journal Title

      IJAACS (International Journal of Autonomous and Adaptive Communications Systems), Inderscience, in press

    • Related Report
      2010 Final Research Report
    • Peer Reviewed
  • [Journal Article] Anthropomorphic awareness of partner robot to user's situation based on gaze and speech detection

    • Author(s)
      Tomoko Yonezawa, et al.
    • Journal Title

      International Journal of Autonomous and Adaptive Communications Systems (in press)

    • Related Report
      2009 Annual Research Report
    • Peer Reviewed
  • [Presentation] Multimodal FONT: A Report on Capturing Individuality Data in Daily Expressions (2011)

    • Author(s)
      Tomoko Yonezawa
    • Organizer
      Human Interface Society SIG Meeting
    • Place of Presentation
      Kyoto Sangyo University
    • Year and Date
      2011-03-08
    • Related Report
      2010 Annual Research Report
  • [Presentation] Multimodal FONT: A Report on Capturing Individuality Data in Daily Expressions (2011)

    • Author(s)
      Tomoko Yonezawa
    • Organizer
      Human Interface Society, 71st SIG Meeting, SIG-DE-05 (pp. 53-56)
    • Related Report
      2010 Final Research Report
  • [Presentation] Crossmodal Awareness of the User's Gaze and Utterances: A Robot That Expresses Its Grasp of the User's Situation (peer reviewed) (2009)

    • Author(s)
      Tomoko Yonezawa, et al.
    • Organizer
      HAI2009 (received the Impressive Experience Award)
    • Place of Presentation
      Tokyo
    • Year and Date
      2009-12-04
    • Related Report
      2009 Annual Research Report
  • [Presentation] Ideas and Development of Interfaces: An Introduction to Applied Research on Media, Devices, and Gaze Communication (invited lecture) (2009)

    • Author(s)
      Tomoko Yonezawa
    • Organizer
      Keio University, Special Lecture on Information Engineering
    • Place of Presentation
      Yokohama
    • Year and Date
      2009-06-12
    • Related Report
      2009 Annual Research Report
  • [Presentation] Evaluating Crossmodal Awareness of Daily-partner Robot to User's Behaviors with Gaze and Utterance Detection (2009)

    • Author(s)
      Tomoko Yonezawa, et al.
    • Organizer
      CASEMANS2009 (received the Best Paper Award)
    • Place of Presentation
      Nara, Japan
    • Year and Date
      2009-05-11
    • Related Report
      2009 Annual Research Report
  • [Presentation] Designing Crossmodality-Adaptive Robot Behaviors for the User's Gaze and Utterances (2009)

    • Author(s)
      Tomoko Yonezawa, Hirotake Yamazoe, Akira Utsumi, Shinji Abe
    • Organizer
      Human Interface Society SIG-DE Meeting (no. 56, pp. 1-6)
    • Related Report
      2010 Final Research Report
  • [Presentation] Evaluating Crossmodal Awareness of Daily-partner Robot to User's Behaviors with Gaze and Utterance Detection (2009)

    • Author(s)
      Tomoko Yonezawa, Hirotake Yamazoe, Akira Utsumi, Shinji Abe
    • Organizer
      CASEMANS2009 (pp. 1-8) (received the Best Paper Award)
    • Related Report
      2010 Final Research Report
  • [Presentation] Crossmodal Awareness of the User's Gaze and Utterances: A Robot That Expresses Its Grasp of the User's Situation (2009)

    • Author(s)
      Tomoko Yonezawa, Hirotake Yamazoe, Yuichi Kamiyama (demo 1T-2 only), Akira Utsumi, Shinji Abe
    • Organizer
      HAI Symposium 2009, oral presentation 2C-3 and hands-on demo 1T-2 (received the Impressive Experience Award)
    • Related Report
      2010 Final Research Report
  • [Presentation] Analysis of Perceptual Characteristics of Timbre-Intensity Changes in Singing Voices Using Voice Morphing (2008)

    • Author(s)
      Tomoko Yonezawa, et al.
    • Organizer
      IPSJ SIG-MUS Meeting
    • Place of Presentation
      Kobe
    • Year and Date
      2008-05-21
    • Related Report
      2008 Annual Research Report
  • [Presentation] Analysis of Perceptual Characteristics of Timbre-Intensity Changes in Singing Voices Using Voice Morphing (2008)

    • Author(s)
      Tomoko Yonezawa, Noriko Suzuki, Shinji Abe, Kenji Mase, Kiyoshi Kogure
    • Organizer
      IPSJ SIG Meeting, SIG HCI-MUS (2008-05) (pp. 25-30)
    • Related Report
      2010 Final Research Report

Published: 2008-04-01   Modified: 2016-04-21  
