1998 Fiscal Year Final Research Report Summary
Experimental analysis of auditory and gestural interface
Project/Area Number |
09610071
|
Research Category |
Grant-in-Aid for Scientific Research (C)
|
Allocation Type | Single-year Grants |
Section | General |
Research Field |
Experimental psychology
|
Research Institution | Musashino Women's University (1998), Tokyo Institute of Technology (1997) |
Principal Investigator |
HAMANO Takashi, Musashino Women's University, Faculty of Contemporary Society, Lecturer (00262288)
|
Co-Investigator (Kenkyū-buntansha) |
KUSUMI Takashi, Tokyo Institute of Technology, Graduate School of Decision Science and Technology, Associate Professor (70195444)
|
Project Period (FY) |
1996 – 1998
|
Keywords | interface / gesuture / auditory |
Research Abstract |
The purpose of this research was to examine the usability of a multimodal interface using systematic musical tones and gestures. In Experiment 1, three participants matched sound stimuli on three elements of musical tones: pitch, rhythm, and direction of the sound. The results showed that matching accuracy was highest for pitch, second for rhythm, and lowest for direction, while the order in which the elements were matched was (1) rhythm, (2) pitch, and (3) direction. Combinations of the elements are also discussed. Based on these results, auditory interfaces can be designed that direct users' attention to the most urgent information and allow it to be received with the necessary accuracy. In Experiments 2 and 3, an auditory condition in which synchronous constant tones continuously presented multi-dimensional information was examined, using a maze-navigation task (Experiment 2) and a mobile-PC simulator (Experiment 3). Under this condition, performance time was shorter than in the no-sound condition (Experiment 2), and the frequency of operating errors was lower when participants used the system a second time (Experiment 3). These results indicate that participants' learning of the systems is promoted by this auditory condition. In Experiments 4 and 5, 24 participants operated a database simulator. Under the gestural-command condition, performance time was shorter than under the pointing and shortcut-key conditions. Finally, the implications of these results for auditory and gestural interfaces were discussed.
|