
Study on a method of recovery from user's error in a multimodal information environment

Research Project

Project/Area Number 13680407
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Single-year Grants
Section General
Research Field Computer Science
Research Institution University of Yamanashi

Principal Investigator

IMAMIYA Atsumi  Univ. of Yamanashi, Interdisciplinary Graduate School of Medicine and Engineering, Professor (40006276)

Co-Investigator (Kenkyū-buntansha) GO Kentaro  Univ. of Yamanashi, Integrated Information Processing Center, Associate Professor (50282009)
Project Period (FY) 2001 – 2003
Project Status Completed (Fiscal Year 2003)
Budget Amount
¥3,600,000 (Direct Cost: ¥3,600,000)
Fiscal Year 2003: ¥700,000 (Direct Cost: ¥700,000)
Fiscal Year 2002: ¥1,400,000 (Direct Cost: ¥1,400,000)
Fiscal Year 2001: ¥1,500,000 (Direct Cost: ¥1,500,000)
Keywords Multimodal Interface / Recovery from User's Error / Gaze / Visual Retrieval / Eyesight Input / Speech / Undo / History / Mixed Reality Space / Visual Search / Remote Camera Control / Two-handed Operation
Research Abstract

Multimodal systems have the potential to greatly improve the flexibility, robustness, efficiency, universal accessibility, and naturalness of human-machine interaction. This study investigated two multimodal techniques related to the integration of speech and gaze, because humans naturally use these two modalities to communicate with each other.
The first study concerned a gaze-and-mouse multimodal user interface. Gaze naturally indicates a person's attention and interests, and eye movements are rapid, so gaze information can provide a quick, natural, and convenient input method. To improve the accuracy of gaze input, a complementary gaze-and-mouse method was proposed: the gaze modality improves speed, either by selecting a target directly or by shortening the distance the mouse must travel, while the mouse supplies accuracy when the gaze fixation lands away from the target.
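As an illustration, here is a minimal Python sketch of how such a complementary scheme could work. The threshold value and function names are assumptions for illustration, not the study's actual implementation: the cursor warps to the gaze fixation only when the fixation is far from the current cursor position, so gaze supplies speed and the mouse supplies the final accuracy.

    import math

    WARP_THRESHOLD_PX = 120  # assumed distance beyond which gaze warping pays off

    def distance(p, q):
        """Euclidean distance between two screen points."""
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def next_cursor_position(cursor, gaze_fixation):
        """Decide where the cursor should be for the next pointing step.

        If the gaze fixation is far from the cursor, warp the cursor to the
        fixation (gaze provides coarse, fast positioning); otherwise keep the
        cursor where it is and let the mouse provide fine, accurate positioning.
        """
        if gaze_fixation is not None and distance(cursor, gaze_fixation) > WARP_THRESHOLD_PX:
            return gaze_fixation
        return cursor

    # Example: the cursor sits at (50, 40) while the user fixates a distant
    # target; the cursor warps there, leaving only a short mouse movement.
    print(next_cursor_position((50, 40), (600, 420)))  # -> (600, 420)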
The second study was about gaze and speech multimodal input methodologies. We use these two modalities naturally and simultaneously in daily life, especially when determining deictic referents in spoken dialogue. However, recognition ambiguities in speech and gaze input are inevitable. Since both gaze and speech are error-prone as stand-alone modalities, the goal of this study was to build an effective and robust human-computer interaction system by combining them.
The features of the speech and gaze multimodal system are as follows:
・The multimodal architecture supports mutual correction of recognition errors across the component modalities: speech recognition errors can be corrected by gaze, and vice versa. Even when both gaze and speech recognition errors occur, the correct multimodal result can still be obtained (a sketch of this fusion idea follows this list).
・Ambiguities in the speech signal can be resolved by gaze information. The multimodal architecture eliminates the need for the lengthy definite descriptions that would otherwise be required for unnamed objects if speech alone were used; gaze information thus significantly simplifies the user's speech. Simplified speech causes fewer recognition errors, facilitates both error avoidance and user acceptance, and provides a natural and intuitive way to interact with the computer.
・The simplified speech also improves interaction speed, giving users an efficient multimodal interface.
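The following is a minimal sketch of the mutual-correction idea referenced in the list above, assuming each modality yields a scored n-best list of candidate referents. The object names, scores, and equal weighting are illustrative assumptions, not the architecture's actual fusion rule.

    def fuse(speech_nbest, gaze_nbest, w_speech=0.5, w_gaze=0.5):
        """Pick the referent with the highest combined score across modalities.

        speech_nbest and gaze_nbest map candidate objects to scores in [0, 1].
        Because both lists vote, an error in one recogniser can be overridden
        by strong evidence from the other.
        """
        candidates = set(speech_nbest) | set(gaze_nbest)
        return max(candidates,
                   key=lambda obj: w_speech * speech_nbest.get(obj, 0.0)
                                 + w_gaze * gaze_nbest.get(obj, 0.0))

    # The speech recogniser mis-ranks "ball" above "bell", but the user is
    # fixating the bell, so the fused result corrects the speech error.
    speech = {"ball": 0.55, "bell": 0.45}
    gaze   = {"bell": 0.90, "box": 0.10}
    print(fuse(speech, gaze))  # -> "bell"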

Report

(4 results)
  • 2003 Annual Research Report
  • Final Research Report Summary
  • 2002 Annual Research Report
  • 2001 Annual Research Report
  • Research Products

    (21 results)

Publications (21 results)

  • [Publications] Qiaohui Zhang, Atsumi Imamiya, Kentaro Go, Xiaoyang Mao: "Designing a Robust Speech and Gaze Multimodal System for Diverse Users" Proceedings of the 2003 IEEE International Conference on Information Reuse and Integration. 354-361 (2003)

    • Description
      From the Final Research Report Summary (Japanese)
    • Related Report
      2003 Final Research Report Summary
  • [Publications] Qiaohui Zhang, Atsumi Imamiya, Kentaro Go, Xiaoyang Mao: "Overriding Errors in a Speech and Gaze Multimodal Architecture" ACM Proceedings of the 2004 International Conference on Intelligent User Interfaces. 346-348 (2004)

    • Description
      From the Final Research Report Summary (Japanese)
    • Related Report
      2003 Final Research Report Summary
  • [Publications] Qiaohui Zhang, Atsumi Imamiya, Kentaro Go, Xiaoyang Mao: "A Gaze and Speech Multimodal Interface" Proceedings of the 6th International Workshop on Multimedia Network Systems and Applications, IEEE Computer Society. (To be presented). (2004)

    • Description
      From the Final Research Report Summary (Japanese)
    • Related Report
      2003 Final Research Report Summary
  • [Publications] Qiaohui Zhang, Atsumi Imamiya, Kentaro Go, Xiaoyang Mao: "Resolving Ambiguities of a Gaze and Speech Interface" Proceedings of the symposium on ACM ETRA 2004: Eye Tracking Research and Applications. (To be presented). (2004)

    • Description
      From the Final Research Report Summary (Japanese)
    • Related Report
      2003 Final Research Report Summary
  • [Publications] Q. Zhang, A. Imamiya, Kentaro Go: "Text Entry Application Based on Gaze Pointing" 7th ERCIM Workshop on User Interfaces for All. 87-102 (2002)

    • Description
      From the Final Research Report Summary (Japanese)
    • Related Report
      2003 Final Research Report Summary
  • [Publications] Qiaohui Zhang, Atsumi Imamiya, Kentaro Go: "Text Entry Application Based on Gaze Pointing" Proceedings of the 7th ERCIM Workshop on "User Interfaces For All", Paris, France, October. 87-102 (2002)

    • Description
      From the Final Research Report Summary (English)
    • Related Report
      2003 Final Research Report Summary
  • [Publications] Qiaohui Zhang, Atsumi Imamiya, Kentaro Go, Xiaoyang Mao: "Designing a Robust Speech and Gaze Multimodal System for Diverse Users" Proceedings of the 2003 IEEE International Conference on Information Reuse and Integration (IRI2003), Las Vegas, Nevada, USA, October. 354-361 (2003)

    • Description
      From the Final Research Report Summary (English)
    • Related Report
      2003 Final Research Report Summary
  • [Publications] Qiaohui Zhang, Atsumi Imamiya, Kentaro Go, Xiaoyang Mao: "Overriding Errors in a Speech and Gaze Multimodal Architecture" Proceedings of the 2004 International Conference on Intelligent User Interfaces (IUI2004), Funchal, Madeira, Portugal, January. 346-348 (2004)

    • Description
      From the Final Research Report Summary (English)
    • Related Report
      2003 Final Research Report Summary
  • [Publications] Qiaohui Zhang, Atsumi Imamiya, Kentaro Go, Xiaoyang Mao: "Resolving Ambiguities of a Gaze and Speech Interface" Proceedings of the symposium on ACM ETRA 2004: Eye Tracking Research and Applications, San Antonio, TX, USA, March. (Accepted for publication). (2004)

    • Description
      From the Final Research Report Summary (English)
    • Related Report
      2003 Final Research Report Summary
  • [Publications] Qiaohui Zhang, Atsumi Imamiya, Kentaro Go, Xiaoyang Mao: "A Gaze and Speech Multimodal Interface" Proceedings of the 6th International Workshop on Multimedia Network Systems and Applications (MNSA2004), IEEE Computer Society, Tokyo, Japan, March. (Accepted for publication). (2004)

    • Description
      From the Final Research Report Summary (English)
    • Related Report
      2003 Final Research Report Summary
  • [Publications] Qiaohui Zhang, Atsumi Imamiya, Kentaro Go, Xiaoyang Mao: "Designing a Robust Speech and Gaze Multimodal System for Diverse Users" Proceedings of the 2003 IEEE International Conference on Information Reuse and Integration. 354-361 (2003)

    • Related Report
      2003 Annual Research Report
  • [Publications] Qiaohui Zhang, Atsumi Imamiya, Kentaro Go, Xiaoyang Mao: "Overriding Errors in a Speech and Gaze Multimodal Architecture" ACM Proceedings of the 2004 International Conference on Intelligent User Interfaces. 346-348 (2004)

    • Related Report
      2003 Annual Research Report
  • [Publications] Qiaohui Zhang, Atsumi Imamiya, Kentaro Go, Xiaoyang Mao: "A Gaze and Speech Multimodal Interface" Proceedings of the 6th International Workshop on Multimedia Network Systems and Applications, IEEE Computer Society. (To be presented). (2004)

    • Related Report
      2003 Annual Research Report
  • [Publications] Qiaohui Zhang, Atsumi Imamiya, Kentaro Go, Xiaoyang Mao: "Resolving Ambiguities of a Gaze and Speech Interface" Proceedings of the symposium on ACM ETRA 2004: Eye Tracking Research and Applications. (To be presented). (2004)

    • Related Report
      2003 Annual Research Report
  • [Publications] Hisanori Masuda, Kentaro Go, Atsumi Imamiya: "Effects of grouping thumbnail images in visual search" HCI International 2003. (To be presented). (2003)

    • Related Report
      2002 Annual Research Report
  • [Publications] Hisanori Masuda, Atsumi Imamiya: "Design of a Graphical History Browser with an Undo Function and Analysis of Visual Search" IEICE Transactions D-I. Vol. J85-D-I, No. 8. 798-810 (2002)

    • Related Report
      2002 Annual Research Report
  • [Publications] H. Masuda, K. Ichikawa, K. Go, A. Imamiya: "Measuring eye movements and mouse-pointing pattern using thumbnail images" Measuring Behavior 2002. 116-168 (2002)

    • Related Report
      2002 Annual Research Report
  • [Publications] Q. Zhang, A. Imamiya, Kentaro Go: "Text Entry Application Based on Gaze Pointing" 7th ERCIM Workshop on User Interfaces for All. 87-102 (2002)

    • Related Report
      2002 Annual Research Report
  • [Publications] M. Omata, K. Go, A. Imamiya: "An Information Presentation Method in the Augmented Reality World Using a Twist between User's Head and Body" ACM Proceedings of International Workshop on Immersive Telepresence. 44-47 (2002)

    • Related Report
      2002 Annual Research Report
  • [Publications] Kentaro Go, Masahiro Ito, Atsumi Imamiya: "An Adaptive Remote Camera Control Method Using Zoom Information" Transactions of Information Processing Society of Japan. Vol. 43, No. 2. 585-592 (2002)

    • Related Report
      2001 Annual Research Report
  • [Publications] Hisanori Masuda, Atsumi Imamiya: "Design of a Graphical History Browser with an Undo Function and Analysis of Visual Search" IEICE Transactions D (accepted for publication). (2002)

    • Related Report
      2001 Annual Research Report

Published: 2001-04-01   Modified: 2016-04-21  
