Project/Area Number |
14208028
|
Research Category |
Grant-in-Aid for Scientific Research (A)
|
Allocation Type | Single-year Grants |
Section | General |
Research Field |
Intelligent informatics
|
Research Institution | The University of Tokyo |
Principal Investigator |
IFUKUBE Tohru The University of Tokyo, Research Center for Advanced Science and Technology, Professor (70002102)
|
Co-Investigator (Kenkyū-buntansha) |
HIROSE Michitaka The University of Tokyo, Research Center for Advanced Science and Technology, Professor (40156716)
FUKUSHIMA Satoshi The University of Tokyo, Research Center for Advanced Science and Technology, Associate Professor (50285079)
HIROTA Koichi The University of Tokyo, Research Center for Advanced Science and Technology, Associate Professor (80273332)
INO Shuichi The University of Tokyo, Research Center for Advanced Science and Technology, Associate Professor (70250511)
NITADORI Yasunobu B.U.G. Inc., Researcher, Director of the Development Division
|
Project Period (FY) |
2002 – 2004
|
Project Status |
Completed (Fiscal Year 2004)
|
Budget Amount |
¥39,130,000 (Direct Cost: ¥30,100,000, Indirect Cost: ¥9,030,000)
Fiscal Year 2004: ¥9,880,000 (Direct Cost: ¥7,600,000, Indirect Cost: ¥2,280,000)
Fiscal Year 2003: ¥13,000,000 (Direct Cost: ¥10,000,000, Indirect Cost: ¥3,000,000)
Fiscal Year 2002: ¥16,250,000 (Direct Cost: ¥12,500,000, Indirect Cost: ¥3,750,000)
|
Keywords | barrier free / non-verbal information / the hearing impaired / automatic captioning system / facial expression / the visually impaired / screen reader / tactile information / speech recognition / tactile aid / respeaking / captions |
Research Abstract |
We investigated how non-verbal information should be utilized to improve two information-acquisition tools that we designed for the visually impaired and the hearing impaired.

1. The first tool is a new screen-reader system that converts on-screen text into synthesized speech while presenting rich-text information, i.e. non-verbal information, on the fingertip surface via a tactile jog dial (TAJODA). It supports web-information acquisition by visually impaired people.
2. The second tool is a new hearing-assistive device (a captioning system) that automatically converts speech into text while displaying the speaker's lip movement and facial expression, i.e. non-verbal information, for people with acquired hearing impairment.

Research results regarding the screen reader are as follows. (1) The rate of the synthesized speech should be adjustable up to 3 times the normal speaking rate, since our findings showed that the maximum listening speed for blind users is 2.6 times faster than that of sighted users. (2) The optimal number of rich-text attributes to transmit is 7: font size (larger or smaller than the standard), colored letters, boldface, parentheses (left or right), and paragraph breaks. (3) Experiments with 8 blind users showed that web sentences are acquired 2.6 times faster than with an ordinary screen reader.

Research results regarding the captioning system are as follows. (1) We investigated how the speaker's facial image and lip movement (the non-verbal information) contribute to sentence comprehension. (2) Experimental results showed that lip movement was more effective than facial expression, and that delaying the lip-movement display by about 2 seconds relative to the captions best facilitated comprehension. (3) In about 30 trials at international conferences, combining the captioning system with this non-verbal information clearly facilitated comprehension.

The non-verbal information, namely the tactile stimulation in TAJODA and the lip movement in the captioning system, was clearly effective in facilitating comprehension of the content. Furthermore, we determined the optimal display timing and display method for the non-verbal information, so as to improve information-assistive tools for the sensory disabled.
|