Study on a personal digital assistance for the visually impaired using tactile and virtual sound interface
Project/Area Number | 15500063
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Single-year Grants
Section | General
Research Field | Media informatics/Database
Research Institution | Shinshu University
Principal Investigator | ITOH Kazunori, Shinshu University, Faculty of Engineering, Associate Professor (30043045)
Project Period (FY) | 2003 – 2004
Project Status | Completed (Fiscal Year 2004)
Budget Amount | ¥3,300,000 (Direct Cost: ¥3,300,000)
Fiscal Year 2004: ¥700,000 (Direct Cost: ¥700,000)
Fiscal Year 2003: ¥2,600,000 (Direct Cost: ¥2,600,000)
Keywords | Virtual sound screen / HRTF / Tablet / Tactile input guide / PDA for the visually impaired / Sound localization |
Research Abstract |
PC interfaces are built around GUIs operated with a pointing device, and such interfaces are very hard for people with visual disabilities to use. Visually impaired persons use PCs mainly to access the Internet: to read the latest articles on a newspaper publisher's website and to contact friends by e-mail. These tasks can be done with a screen reader, for example. Using the Internet calls for a PC that processes a large amount of information at high speed; such machines are generally GUI systems like Windows, so visually impaired persons will also use a GUI PC. The GUI layout plays an important role in presenting information and in operating the PC, but a screen reader, which reads characters sequentially, does not take the layout into account at all, and many visually impaired users want to be able to recognize it. In this study, we show how a new localized sound system can be used to support recognition of the layout of GUI objects.
The localized sounds that correspond to the GUI objects are created using HRTFs in the median plane and interaural differences between the two ears in the horizontal plane. The filtering, caused by reflections and diffractions from the human torso, head, and pinna, is described by the HRTFs. Binaural synthesis works extremely well when the listener's own HRTFs are used to synthesize the sound localization cues. The computational simulation is carried out by binaural synthesis using HRTFs, with the sound presented over headphones. We found that sounds obtained by sampling in the vertical direction, using white noise from a sound source near the pinna, were easy to localize. As a simple model of a GUI layout, we propose a 25-button layout with a localized sound assigned to each button and examine how effective the localized sounds are for grasping the positions of the buttons. We found that the virtual sound screen can be used to recognize the layout of GUI objects as well as to guide a mouse cursor.
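To make the rendering step concrete, the following is a minimal sketch of binaural synthesis under stated assumptions: the HRIRs (head-related impulse responses), the ITD/ILD values, and the function names are illustrative placeholders, not the measured data or the implementation used in the study.

```python
# Minimal sketch of binaural synthesis: median-plane cues come from the HRIR
# spectra; horizontal-plane cues are approximated by an interaural time
# difference (ITD) and level difference (ILD). All values are placeholders.
import numpy as np
from scipy.signal import fftconvolve


def synthesize_binaural(mono, hrir_left, hrir_right, itd_samples=0, ild_db=0.0):
    """Render a mono signal at a virtual direction for headphone playback."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)

    # Interaural level difference: attenuate the ear farther from the source.
    gain = 10.0 ** (-abs(ild_db) / 20.0)
    if ild_db >= 0:          # source toward the left ear
        right *= gain
    else:                    # source toward the right ear
        left *= gain

    # Interaural time difference: delay the far ear by a few samples.
    delay = abs(int(itd_samples))
    if itd_samples >= 0:
        right = np.concatenate([np.zeros(delay), right])
        left = np.concatenate([left, np.zeros(delay)])
    else:
        left = np.concatenate([np.zeros(delay), left])
        right = np.concatenate([right, np.zeros(delay)])

    return np.stack([left, right], axis=1)  # shape (samples, 2)


# Example: localize a white-noise burst (the stimulus found easiest to locate)
# using dummy impulse responses in place of measured HRIRs.
fs = 44100
noise = np.random.default_rng(0).standard_normal(fs // 4)
hrir_l = np.zeros(256); hrir_l[0] = 1.0   # placeholder HRIRs
hrir_r = np.zeros(256); hrir_r[0] = 1.0
stereo = synthesize_binaural(noise, hrir_l, hrir_r, itd_samples=20, ild_db=3.0)
```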
We also examine whether a visually impaired person can draw a formal basic figure unaided. The patterns treated in this report are restricted to collections of lines. To draw formal patterns, we prepared an acrylic pen input guide and placed it on a tablet. The guide has a 9-by-9 lattice of holes that correspond to 9-by-9 localized sounds on a two-dimensional virtual sound screen. Using this guide and a stylus pen, a visually impaired user can draw a pattern, checking and correcting the input by listening to the localized sounds. Experiments show that the subjects can input simple patterns correctly.
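The mapping from guide holes to localized sounds can also be illustrated with a small sketch. The angular span of the virtual sound screen and the names used here are assumptions made for illustration only; the abstract does not specify these values.

```python
# Illustrative sketch (not the authors' implementation): map each hole of the
# 9x9 pen-guide lattice to a direction on the virtual sound screen. Azimuth
# would be rendered via interaural differences, elevation via HRTF cues in the
# median plane; the angular ranges below are assumed values.
from dataclasses import dataclass

GRID = 9                       # 9-by-9 lattice of holes
AZ_RANGE = (-30.0, 30.0)       # assumed horizontal span, degrees
EL_RANGE = (-30.0, 30.0)       # assumed vertical span, degrees


@dataclass(frozen=True)
class Direction:
    azimuth_deg: float    # horizontal plane (interaural differences)
    elevation_deg: float  # median plane (HRTF spectral cues)


def hole_to_direction(col: int, row: int) -> Direction:
    """Map a lattice hole (col, row), each 0..8, to a virtual sound direction."""
    if not (0 <= col < GRID and 0 <= row < GRID):
        raise ValueError("hole index out of range")
    az = AZ_RANGE[0] + col * (AZ_RANGE[1] - AZ_RANGE[0]) / (GRID - 1)
    el = EL_RANGE[1] - row * (EL_RANGE[1] - EL_RANGE[0]) / (GRID - 1)  # top row highest
    return Direction(az, el)


# Example: the centre hole maps straight ahead, a corner to an oblique direction.
print(hole_to_direction(4, 4))   # Direction(azimuth_deg=0.0, elevation_deg=0.0)
print(hole_to_direction(0, 0))   # Direction(azimuth_deg=-30.0, elevation_deg=30.0)
```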
Report (3 results)
Research Products (11 results)