2017 Fiscal Year Final Research Report
Development of multimodal sign language recognition system considering the importance of manual signal and non-manual signal
Project/Area Number | 15K12601
Research Category | Grant-in-Aid for Challenging Exploratory Research
Allocation Type | Multi-year Fund
Research Field | Rehabilitation science/Welfare engineering
Research Institution | Kyushu Institute of Technology
Principal Investigator | Saitoh Takeshi, Kyushu Institute of Technology, Graduate School of Computer Science and Systems Engineering, Associate Professor (10379654)
Project Period (FY) | 2015-04-01 – 2018-03-31
Keywords | Sign language recognition / Gaze information analysis / Lip reading / Facial expression recognition
Outline of Final Research Achievements |
Sign language is the primary means of communication for hearing-impaired people. It uses not only manual signals (hand and finger movements) but also non-manual signals such as lip movements and facial expressions. Sign language recognition (SLR) based on image processing is expected to reach practical use because it relies on a camera, a non-contact sensor. However, most related studies have considered only hand movements. In this research, we examined not only SLR based on manual signals but also lip-reading and facial expression recognition techniques. We also constructed a database using a motion sensor so that the research could progress smoothly. Furthermore, using gaze point estimation, we analyzed where observers fixate when watching sign language scenes.
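As a minimal illustration of how recognition results from manual and non-manual signals could be combined while weighting their importance, the sketch below shows weighted late fusion of per-modality class scores. The report does not specify the actual combination method; the function, labels, scores, and weights here are all hypothetical.

```python
# Illustrative sketch only: weighted late fusion of class scores from
# separate recognizers (hand, lip, facial expression). All names, scores,
# and weights below are hypothetical, not taken from the report.

def late_fusion(modality_scores, weights):
    """Combine per-class score dicts from several recognizers.

    modality_scores: dict mapping modality name -> {sign label: score}
    weights: dict mapping modality name -> importance weight
    Returns the sign label with the highest weighted total score.
    """
    fused = {}
    for modality, scores in modality_scores.items():
        w = weights.get(modality, 0.0)
        for label, score in scores.items():
            fused[label] = fused.get(label, 0.0) + w * score
    return max(fused, key=fused.get)

# Hypothetical per-class scores from three single-modality recognizers.
scores = {
    "hand": {"thank_you": 0.6, "hello": 0.4},
    "lip":  {"thank_you": 0.3, "hello": 0.7},
    "face": {"thank_you": 0.5, "hello": 0.5},
}
# Manual signals weighted more heavily, reflecting their larger role.
weights = {"hand": 0.6, "lip": 0.25, "face": 0.15}
print(late_fusion(scores, weights))  # → thank_you
```

Here the hand recognizer's preference outweighs the lip recognizer's disagreement because of its larger weight; in practice such weights would be learned or tuned per signer and per sign.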
Free Research Field | Biomedical engineering / Rehabilitation science / Welfare engineering