Project/Area Number | 15K12601 |
Research Category | Grant-in-Aid for Challenging Exploratory Research |
Allocation Type | Multi-year Fund |
Research Field | Rehabilitation science/Welfare engineering |
Research Institution | Kyushu Institute of Technology |
Principal Investigator | Saitoh Takeshi, Kyushu Institute of Technology, Graduate School of Computer Science and Systems Engineering, Associate Professor (10379654) |
Project Period (FY) | 2015-04-01 – 2018-03-31 |
Project Status | Completed (Fiscal Year 2017) |
Budget Amount | ¥2,990,000 (Direct Cost: ¥2,300,000, Indirect Cost: ¥690,000) |
Fiscal Year 2017: ¥650,000 (Direct Cost: ¥500,000, Indirect Cost: ¥150,000)
Fiscal Year 2016: ¥1,170,000 (Direct Cost: ¥900,000, Indirect Cost: ¥270,000)
Fiscal Year 2015: ¥1,170,000 (Direct Cost: ¥900,000, Indirect Cost: ¥270,000)
Keywords | sign language recognition / gaze information analysis / lip reading / facial expression recognition / multimodal sign language recognition / manual and non-manual movements |
Outline of Final Research Achievements |
Sign language is the primary means of communication for hearing-impaired people. It uses not only hand and finger movements but also non-manual movements such as lip movement and facial expression. Sign language recognition (SLR) based on image processing is expected to reach practical use because it relies on a camera, which is a non-contact sensor. However, most related studies have considered only hand movements. In this research, we investigated not only SLR from hand and finger movements but also lip-reading and facial expression recognition techniques. To support smooth progress of the research, we also constructed a database using a motion sensor. Furthermore, using a gaze point estimation technique, we analyzed gaze points of viewers observing sign language scenes.
|