
Development of a multimodal sign language recognition system considering the importance of manual and non-manual signals

Research Project

Project/Area Number 15K12601
Research Category

Grant-in-Aid for Challenging Exploratory Research

Allocation Type Multi-year Fund
Research Field Rehabilitation science/Welfare engineering
Research Institution Kyushu Institute of Technology

Principal Investigator

Saitoh Takeshi  Kyushu Institute of Technology, Faculty of Computer Science and Systems Engineering, Associate Professor (10379654)

Project Period (FY) 2015-04-01 – 2018-03-31
Project Status Completed (Fiscal Year 2017)
Budget Amount
¥2,990,000 (Direct Cost: ¥2,300,000, Indirect Cost: ¥690,000)
Fiscal Year 2017: ¥650,000 (Direct Cost: ¥500,000, Indirect Cost: ¥150,000)
Fiscal Year 2016: ¥1,170,000 (Direct Cost: ¥900,000, Indirect Cost: ¥270,000)
Fiscal Year 2015: ¥1,170,000 (Direct Cost: ¥900,000, Indirect Cost: ¥270,000)
Keywords sign language recognition / gaze information analysis / lipreading / facial expression recognition / multimodal sign language recognition / manual and non-manual movements
Outline of Final Research Achievements

Sign language is the main means of communication for hearing-impaired people. It conveys meaning not only through manual signals (hand and finger movements) but also through non-manual signals such as lip movement and facial expression. Sign language recognition (SLR) based on image processing technology is expected to reach practical use because it relies only on a camera, a non-contact sensor. However, most related studies have considered hand movements alone. In this research, we examined not only SLR using manual signals but also lipreading and facial expression recognition techniques. We also constructed a database using a motion sensor so that the research could proceed smoothly. Furthermore, we analyzed gaze points during observation of sign language scenes by using gaze point estimation.
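
As a concrete illustration of the multimodal idea summarized above, the sketch below shows one simple way to combine manual and non-manual cues: weighted late fusion of per-modality recognition scores. This is only a minimal, hypothetical example and not the method published in this project; the fuse_modalities function, the modality names, the weights, and the scores are all illustrative assumptions.

# Minimal illustrative sketch (not this project's actual implementation):
# weighted late fusion of per-modality classifier scores for multimodal
# sign language word recognition. All names and numbers are hypothetical.
import numpy as np

def fuse_modalities(scores, weights):
    """Combine per-word score vectors from each modality by a weighted sum
    and return the index of the best-scoring sign word."""
    total = None
    for name, s in scores.items():
        s = np.asarray(s, dtype=float)
        s = s / s.sum()                  # normalize scores within a modality
        w = weights.get(name, 1.0)       # weight reflects modality importance
        total = w * s if total is None else total + w * s
    return int(np.argmax(total))

# Hypothetical per-word scores for a 3-word vocabulary.
scores = {
    "hand_motion": [0.5, 0.3, 0.2],        # manual signal (e.g., skeleton-based)
    "lip_movement": [0.2, 0.6, 0.2],       # non-manual signal: lipreading
    "facial_expression": [0.3, 0.4, 0.3],  # non-manual signal: expression
}
weights = {"hand_motion": 0.6, "lip_movement": 0.25, "facial_expression": 0.15}

print(fuse_modalities(scores, weights))    # prints the index of the recognized word

Giving the manual modality the largest weight mirrors the project's premise that manual and non-manual signals differ in importance; in practice such weights would be learned or validated on data rather than fixed by hand.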

Report

(4 results)
  • 2017 Annual Research Report
  • Final Research Report (PDF)
  • 2016 Research-status Report
  • 2015 Research-status Report

Research Products

(12 results)

Presentation (12 results) (of which Int'l Joint Research: 3 results)

  • [Presentation] Multimodal Sign Language Recognition Using Recurrent Neural Networks (2017)

    • Author(s)
      Tomoya Kodama, Takeshi Saitoh
    • Organizer
      20th Meeting on Image Recognition and Understanding (MIRU2017)
    • Related Report
      2017 Annual Research Report
  • [Presentation] Kinect Sensor Based Sign Language Word Recognition by Multi-Stream HMM (2017)

    • Author(s)
      Tomoya Kodama, Tomoki Koyama, Takeshi Saitoh
    • Organizer
      SICE Annual Conference 2017
    • Related Report
      2017 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Examination of Skeletal Information Effective for Sign Language Recognition (2017)

    • Author(s)
      Tomoya Kodama, Takeshi Saitoh
    • Organizer
      IEICE Technical Committee on Speech and Technical Committee on Well-being Information Technology
    • Related Report
      2017 Annual Research Report
  • [Presentation] Lipreading with a CNN Using Concatenated Frame Images (2016)

    • Author(s)
      Takeshi Saitoh
    • Organizer
      6th Symposium on Biometrics, Recognition and Authentication (SBRA2016)
    • Place of Presentation
      Shibaura Institute of Technology (Koto-ku, Tokyo)
    • Year and Date
      2016-11-16
    • Related Report
      2016 Research-status Report
  • [Presentation] Sign Language Word Recognition with a Convolutional Neural Network Using Concatenated Frame Images of Depth Images (2016)

    • Author(s)
      橋村 佳祐,齊藤 剛史
    • Organizer
      IEICE Technical Committee on Welfare Engineering
    • Place of Presentation
      Karatsu Royal Hotel (Karatsu, Saga)
    • Year and Date
      2016-10-16
    • Related Report
      2016 Research-status Report
  • [Presentation] Sign Language Word Recognition with a CNN Using Concatenated Frame Images of Depth Images (2016)

    • Author(s)
      橋村 佳祐,齊藤 剛史
    • Organizer
      3rd Silent Speech Recognition Workshop
    • Place of Presentation
      Fukuoka Asahi Building (Fukuoka City, Fukuoka)
    • Year and Date
      2016-10-14
    • Related Report
      2016 Research-status Report
  • [Presentation] Lipreading with a CNN Using Concatenated Frame Images (2016)

    • Author(s)
      Takeshi Saitoh
    • Organizer
      3rd Silent Speech Recognition Workshop
    • Place of Presentation
      Fukuoka Asahi Building (Fukuoka City, Fukuoka)
    • Year and Date
      2016-10-14
    • Related Report
      2016 Research-status Report
  • [Presentation] LBP-TOP based Facial Expression Recognition using Non Rectangular ROI (2016)

    • Author(s)
      Masaya Iwasaki and Takeshi Saitoh
    • Organizer
      International Conference on Information and Communication Technology Robotics (ICT-ROBOT2016)
    • Place of Presentation
      BEXCO, Busan, Korea
    • Year and Date
      2016-09-07
    • Related Report
      2016 Research-status Report
    • Int'l Joint Research
  • [Presentation] Scene Recognition with a CNN Using Concatenated Frame Images (2016)

    • Author(s)
      Takeshi Saitoh, Ziheng Zhou, Iryna Anina, Guoying Zhao, Matti Pietikainen
    • Organizer
      19th Meeting on Image Recognition and Understanding (MIRU2016)
    • Place of Presentation
      Act City Hamamatsu (Hamamatsu, Shizuoka)
    • Year and Date
      2016-08-01
    • Related Report
      2016 Research-status Report
  • [Presentation] Gaze Information Analysis During Observation of Sign Language Scenes (2016)

    • Author(s)
      祐宗 高徳,渋谷 昌尚,川田 健司,齊藤 剛史
    • Organizer
      IEICE Technical Committee on Pattern Recognition and Media Understanding
    • Place of Presentation
      Kyushu Institute of Technology
    • Year and Date
      2016-02-21
    • Related Report
      2015 Research-status Report
  • [Presentation] Concatenated Frame Image based CNN for Visual Speech Recognition (2016)

    • Author(s)
      Takeshi Saitoh, Ziheng Zhou, Guoying Zhao, and Matti Pietikainen
    • Organizer
      ACCV2016 workshop: Multi-view Lip-reading/Audio-visual Challenges (MLAC2016)
    • Place of Presentation
      Taipei International Convention Center (TICC), Taipei, Taiwan
    • Related Report
      2016 Research-status Report
    • Int'l Joint Research
  • [Presentation] Sign Language Recognition Using Light-HMM (2015)

    • Author(s)
      橋村 佳祐,齊藤 剛史
    • Organizer
      2015 Joint Conference of Kyushu Branches of Electrical and Information Related Societies
    • Place of Presentation
      Fukuoka University
    • Year and Date
      2015-09-26
    • Related Report
      2015 Research-status Report

Published: 2015-04-16   Modified: 2019-03-29  
