An optimum method of displaying speakers' face images and captions for a real-time speech-to-caption system for the deaf
Project/Area Number |
19700627
|
Research Category |
Grant-in-Aid for Young Scientists (B)
|
Allocation Type | Single-year Grants |
Research Field |
Educational technology
|
Research Institution | Tsukuba University of Technology |
Principal Investigator |
KUROKI Hayato Tsukuba University of Technology, Faculty of Industrial Technology, Associate Professor (00345159)
|
Co-Investigator(Renkei-kenkyūsha) |
IFUKUBE Tohru The University of Tokyo, Research Center for Advanced Science and Technology, Professor (70002102)
NAKANO Satoko The University of Tokyo, Research Center for Advanced Science and Technology, Project Assistant Professor (20359665)
HORI Kotaro B.U.G., Inc.
|
Project Period (FY) |
2007 – 2010
|
Project Status |
Completed (Fiscal Year 2010)
|
Budget Amount |
¥3,890,000 (Direct Cost: ¥3,200,000, Indirect Cost: ¥690,000)
Fiscal Year 2010: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2009: ¥1,040,000 (Direct Cost: ¥800,000, Indirect Cost: ¥240,000)
Fiscal Year 2008: ¥1,040,000 (Direct Cost: ¥800,000, Indirect Cost: ¥240,000)
Fiscal Year 2007: ¥900,000 (Direct Cost: ¥900,000)
|
Keywords | Human Interface / Support for Persons with Disabilities / Speech Recognition / Captions / Nonverbal Information / Educational Technology / Cognitive Science / Information Engineering / User Interface / Higher Education Support |
Research Abstract |
Our ongoing real-time speech-to-caption system, which uses speech recognition technology with a "repeat-speaking" method, achieved a caption accuracy of about 97% in Japanese-to-Japanese conversion. We investigated how to display captions together with images of the speaker's face movements so as to achieve higher comprehension. The results showed that the speaker's mouth movements help improve comprehension of the captions, and that displaying the caption before the speaker's face image further improves comprehension.
|