
Experimental analysis of auditory and gestural interface

Research Project

Project/Area Number 09610071
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Single-year Grants
Section General
Research Field Experimental Psychology
Research Institution Musashino Women's University (1998)
Tokyo Institute of Technology (1997)

Principal Investigator

HAMANO Takashi  Musashino Women's University, Faculty of Contemporary Society, Assistant Professor (00262288)

Co-Investigator (Kenkyū-buntansha) KUSUMI Takashi  Tokyo Institute of Technology, Graduate School of Decision Science and Technology, Associate Professor (70195444)
MUTA Hiromitsu  Tokyo Institute of Technology, Graduate School of Decision Science and Technology, Professor (70090925)
MATSUDA Toshiki  Tokyo Institute of Technology, Graduate School of Decision Science and Technology, Associate Professor (60173845)
NAKAGAWA Masanori  Tokyo Institute of Technology, Graduate School of Decision Science and Technology, Professor (40155685)
Project Period (FY) 1996 – 1998
Project Status Completed (Fiscal Year 1998)
Budget Amount
¥1,700,000 (Direct Cost: ¥1,700,000)
Fiscal Year 1998: ¥600,000 (Direct Cost: ¥600,000)
Fiscal Year 1997: ¥1,100,000 (Direct Cost: ¥1,100,000)
Keywords interface / gesture / auditory / user interface / metaphor
Research Abstract

The purpose of this research is to examine the usability of a multimodal interface that uses systematic musical tones and gestures. In experiment 1, three participants matched sound stimuli on three elements of musical tones: pitch, rhythm, and the direction of the sound. The results showed that matching accuracy was highest for pitch, second for rhythm, and lowest for direction. The order in which the elements were matched was (1) rhythm, (2) pitch, and (3) direction. Combinations of the elements are also discussed. Based on these results, auditory interfaces can be designed that direct users' attention to the most urgent information and allow it to be received with the necessary accuracy. In experiments 2 and 3, an auditory condition in which synchronous constant tones continuously presented multi-dimensional information was also examined. Two experiments were conducted with this condition: a maze-navigation task (experiment 2) and a mobile PC simulator (experiment 3). Under this condition, task completion time was shorter than under the no-sound condition (experiment 2), and operating errors were less frequent when participants used the system a second time (experiment 3). These results indicate that participants' learning of the systems is promoted by the auditory condition. In experiments 4 and 5, 24 participants operated a database simulator. Under the gestural-command condition, task completion time was shorter than under the pointing and shortcut-key conditions. Finally, the implications of these results for auditory and gestural interfaces are discussed.

Report

(3 results)
  • 1998 Annual Research Report
  • Final Research Report Summary
  • 1997 Annual Research Report

Published: 1997-04-01   Modified: 2016-04-21  
