2010 Fiscal Year Final Research Report
An optimum method of displaying speakers' face images and captions for a real-time speech-to-caption system for the deaf
Project/Area Number | 19700627 |
Research Category | Grant-in-Aid for Young Scientists (B) |
Allocation Type | Single-year Grants |
Research Field | Educational technology |
Research Institution | Tsukuba University of Technology |
Principal Investigator | KUROKI Hayato, Tsukuba University of Technology, Faculty of Industrial Technology, Associate Professor (00345159) |
Co-Investigator (Renkei-kenkyūsha) |
IFUKUBE Tohru, The University of Tokyo, Research Center for Advanced Science and Technology, Professor (70002102)
NAKANO Satoko, The University of Tokyo, Research Center for Advanced Science and Technology, Project Assistant Professor (20359665)
HORI Kotaro, B.U.G., Inc.
|
Project Period (FY) | 2007 – 2010 |
Keywords | Human interface / Support for persons with disabilities / Speech recognition / Captions / Nonverbal information / Educational technology / Cognitive science / Information engineering |
Research Abstract |
Our real-time speech-to-caption system, which uses speech recognition technology with a "repeat-speaking" method, achieves a caption accuracy of about 97% for Japanese-to-Japanese conversion. We investigated how captions and images of the speaker's face movements should be displayed together to achieve higher comprehension. The results showed that the speaker's mouth movements contribute to better comprehension of the captions, and that the display sequence in which the caption is shown before the speaker's face image also improves comprehension.
|
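The finding above concerns display ordering: each caption should appear slightly before the matching clip of the speaker's face. The following is a minimal sketch of how such scheduling could be expressed; it is not the authors' implementation, and the event structure, function names, and the 0.5-second lead time are illustrative assumptions rather than values from the report.

```python
# Sketch: schedule caption and face-video display events so that each caption
# precedes the corresponding face image, as the report found helpful.
# All names and the CAPTION_LEAD_S value are hypothetical assumptions.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class CaptionSegment:
    text: str            # caption text produced by the re-speaking + ASR pipeline
    speech_start: float  # time (s) at which the original utterance begins


CAPTION_LEAD_S = 0.5  # assumed lead: caption shown this long before the face video


def schedule_display(segments: List[CaptionSegment]) -> List[Tuple[float, str, str]]:
    """Return (time, channel, payload) display events, with captions leading the face video."""
    events: List[Tuple[float, str, str]] = []
    for seg in segments:
        # Caption first ...
        events.append((seg.speech_start, "caption", seg.text))
        # ... then the corresponding face-movement clip, delayed by the lead time.
        events.append((seg.speech_start + CAPTION_LEAD_S, "face_video",
                       f"clip@{seg.speech_start:.1f}s"))
    return sorted(events, key=lambda e: e[0])


if __name__ == "__main__":
    demo = [CaptionSegment("Hello, everyone.", 0.0),
            CaptionSegment("Today's topic is captioning.", 2.0)]
    for t, channel, payload in schedule_display(demo):
        print(f"{t:5.1f}s  {channel:10s}  {payload}")
```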