Acoustic signal processing for real world captioning system via see-through head mounted display
Project/Area Number | 15K20886
Research Category | Grant-in-Aid for Young Scientists (B)
Allocation Type | Multi-year Fund
Research Field | Human interface and interaction / Perceptual information processing
Research Institution | University of Tsukuba
Principal Investigator | Zempo Keiichi (University of Tsukuba, Faculty of Engineering, Information and Systems, Assistant Professor) (70725712)
Project Period (FY) | 2015-04-01 – 2018-03-31
Project Status | Completed (Fiscal Year 2017)
Budget Amount | ¥4,030,000 (Direct Cost: ¥3,100,000, Indirect Cost: ¥930,000)
Fiscal Year 2016: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2015: ¥2,600,000 (Direct Cost: ¥2,000,000, Indirect Cost: ¥600,000)
Keywords | Wearable devices / Hearing impairment support / Acoustic information processing / Human interface / Sensory substitution system / Information compensation / Hearing assistance / Array signal processing / Single-channel microphone array / Sensory substitution / Wearable / Information assurance
Outline of Final Research Achievements |
The aim of this research is to realize a system that performs natural sensory substitution and thereby guarantees access to information for people with hearing impairments. We realized a real-world captioning system that presents captions for face-to-face dialogue by adding a direction-sensitive microphone and a caption-presentation application to a see-through head-mounted display (HMD). Because an ordinary HMD has only a limited number of microphone input terminals, we developed a microphone array that can form a beam pattern even with single-channel input, together with the signal processing technique it requires. For captioning, the system automatically detects the speaker, and we constructed an AR system in which the conversation content appears as a speech balloon emerging from the speaker's face.
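The summary does not detail the signal processing behind the single-channel-input microphone array, so the following is only a minimal reference sketch: a conventional multi-channel delay-and-sum beamformer in Python/NumPy that steers a pickup beam toward the conversation partner in front of the wearer. The array geometry, element count, sampling rate, and all names in the code are assumptions for illustration, not the project's actual method.

```python
# Minimal sketch of a conventional delay-and-sum beamformer (NumPy only).
# Assumed geometry: a small uniform linear array; 0 degrees = straight ahead,
# i.e. toward the conversation partner facing the HMD wearer.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature (assumption)
FS = 16000              # sampling rate in Hz (assumption)
MIC_SPACING = 0.02      # 2 cm between adjacent elements (assumption)


def delay_and_sum(signals: np.ndarray, steer_deg: float) -> np.ndarray:
    """Steer a beam toward `steer_deg` and return one beamformed channel.

    signals: array of shape (num_mics, num_samples), one row per microphone.
    """
    num_mics, num_samples = signals.shape
    # Far-field arrival-time differences at each element for the steered angle.
    positions = (np.arange(num_mics) - (num_mics - 1) / 2) * MIC_SPACING
    delays = positions * np.sin(np.deg2rad(steer_deg)) / SPEED_OF_SOUND

    # Advance each channel by its expected arrival delay (fractional delay via
    # a frequency-domain phase shift) so the steered direction adds coherently.
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / FS)
    aligned = np.fft.irfft(
        spectra * np.exp(2j * np.pi * freqs[None, :] * delays[:, None]),
        n=num_samples, axis=1,
    )
    return aligned.mean(axis=0)


if __name__ == "__main__":
    # Synthetic check: a 500 Hz tone arriving from 20 degrees plus uncorrelated noise.
    t = np.arange(FS) / FS
    rng = np.random.default_rng(0)
    positions = (np.arange(4) - 1.5) * MIC_SPACING
    toa = positions * np.sin(np.deg2rad(20.0)) / SPEED_OF_SOUND
    mics = np.stack([np.sin(2 * np.pi * 500 * (t - d)) for d in toa])
    mics += 0.5 * rng.standard_normal(mics.shape)
    out = delay_and_sum(mics, steer_deg=20.0)
    print("single-mic variance:", round(float(np.var(mics[0])), 3),
          "beamformed variance:", round(float(np.var(out)), 3))
```

In this sketch the beam is steered toward the detected speaker so that cleaner speech can be passed on to recognition; the project's contribution of obtaining such directionality over a single input channel is not reproduced here.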
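Likewise, the speaker detection and speech-balloon presentation are described only at a high level; the sketch below uses OpenCV's stock Haar-cascade face detector and a plain text overlay as stand-ins, purely to illustrate anchoring a caption to the interlocutor's face. The detector choice, camera source, and all names are assumptions, and the caption string would in practice come from the speech recognizer.

```python
# Illustrative sketch: detect a face in the (assumed) HMD camera image and draw
# the recognized caption text near it, in the spirit of the "speech balloon"
# presentation described in the summary.
import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def draw_caption_balloons(frame, caption: str):
    """Draw `caption` just above every detected face in a BGR frame (in place)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # White balloon background just above the face bounding box.
        top = max(y - 40, 0)
        cv2.rectangle(frame, (x, top), (x + w, top + 30), (255, 255, 255), -1)
        cv2.putText(frame, caption, (x + 5, top + 22),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 0), 1, cv2.LINE_AA)
    return frame


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # default webcam stands in for the HMD camera
    ok, frame = cap.read()
    if ok:
        out = draw_caption_balloons(frame, "Hello!")  # caption would come from ASR
        cv2.imwrite("captioned_frame.png", out)
    cap.release()
```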