
FY2017 Annual Research Progress Report

Multimodal Deep Learning Framework for Intelligent Brain Computer Interface System

Research Project

Project/Area Number: 17K13279
Research Institution: Advanced Telecommunications Research Institute International (ATR)

Principal Investigator

Penaloza C., Advanced Telecommunications Research Institute International (ATR), Hiroshi Ishiguro Laboratories, Researcher (80753532)

Project Period (FY): 2017-04-01 – 2019-03-31
Keywords: Brain Machine Interface
Outline of Annual Research Achievements

We report the progress of the BMI system, which incorporates a multimodal approach to learn the correlation between the context of a task, visual sensory data, and brain data. An experiment was conducted to determine the optimal type of interface (virtual or physical) for the BMI system. Results showed that a physical, human-looking interface (human-like hands) produced optimal feedback. These results were published in a peer-reviewed journal. Subsequently, a human-like robot arm was acquired to perform goal-oriented task experiments, and the results were submitted to a Science journal. Finally, a camera was added to the robot arm so that the robot could recognize visual context using Deep Learning. The system prototype was completed, and the user study results were submitted to an international peer-reviewed conference.

Current Status of Research Progress

2: Research has progressed rather smoothly.

Reason

The project has progressed smoothly. Some of the activities originally planned have been accomplished, while other activities are still pending completion. Currently, the multimodal sensory data integration stage has begun. A camera was added to the robot arm so that the robot could analyze visual content and recognize the context, and Deep Learning algorithms were implemented for object detection and human action recognition. The system prototype was completed, and a user study was conducted to confirm the proper functionality of the system. The user study results were submitted to an international peer-reviewed conference.
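As an illustrative aside only (not the project's actual implementation), the sketch below shows one minimal way a robot-arm camera frame could be mapped to a coarse visual context using a COCO-pretrained object detector from torchvision; the function name, threshold, and file path are assumptions for illustration.

```python
# Illustrative sketch, assuming torchvision >= 0.13: derive a coarse
# "visual context" (set of detected object labels) from a camera image.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained Faster R-CNN detector.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Category names shipped with the pretrained weights.
COCO_LABELS = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT.meta["categories"]

def visual_context(image_path, score_threshold=0.7):
    """Return the set of object labels detected above a confidence threshold."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        detections = model([image])[0]
    keep = detections["scores"] >= score_threshold
    return {COCO_LABELS[i] for i in detections["labels"][keep].tolist()}

# Hypothetical usage: detected labels ("cup", "bottle", ...) serve as
# task context for the BMI decoder.
# print(visual_context("camera_frame.jpg"))
```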

Strategy for Future Research Activity

Tactile sensors will be installed in the hands of the robot to provide information such as pressure, vibration, and stiffness, and to give the user proper vibrotactile feedback. Software that integrates sensing data from multiple sources and sends control commands to the robot will be developed. Finally, the proposed approach will apply Multimodal Deep Learning principles to learn the correlation between the neural activity of the operator and visual-tactile sensory information from the robot while it performs a sequence of goal-oriented actions to complete a particular task. Experimental trials with human participants will be conducted. System performance will be evaluated based on accuracy on the task, and subject feedback will be recorded using pre- and post-experiment questionnaires.
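For illustration only, the following minimal sketch outlines one possible late-fusion architecture of the kind described above, correlating assumed EEG features with visual and tactile embeddings to predict a goal-oriented robot action. The input dimensions, modality encoders, and number of actions are placeholders, not the project's actual design.

```python
# Minimal late-fusion sketch (assumed feature dimensions, not the real system).
import torch
import torch.nn as nn

class MultimodalFusionNet(nn.Module):
    def __init__(self, eeg_dim=64, visual_dim=512, tactile_dim=16,
                 hidden_dim=128, num_actions=4):
        super().__init__()
        # One small encoder per modality.
        self.eeg_enc = nn.Sequential(nn.Linear(eeg_dim, hidden_dim), nn.ReLU())
        self.vis_enc = nn.Sequential(nn.Linear(visual_dim, hidden_dim), nn.ReLU())
        self.tac_enc = nn.Sequential(nn.Linear(tactile_dim, hidden_dim), nn.ReLU())
        # Fusion head over the concatenated modality embeddings.
        self.head = nn.Sequential(
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, eeg, visual, tactile):
        fused = torch.cat(
            [self.eeg_enc(eeg), self.vis_enc(visual), self.tac_enc(tactile)],
            dim=-1,
        )
        return self.head(fused)  # logits over candidate robot actions

# Usage with dummy batch data; all dimensions are placeholders.
model = MultimodalFusionNet()
logits = model(torch.randn(8, 64), torch.randn(8, 512), torch.randn(8, 16))
action = logits.argmax(dim=-1)  # selected goal-oriented action per sample
```

Concatenating per-modality embeddings (late fusion) is only one option; the trained correlation could equally be learned with shared or attention-based fusion layers.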

Reason for Carryover to the Next Fiscal Year

Several materials still need to be purchased, such as sensors, processing units, and an AR headset to provide novel visual feedback to users. In addition, payment for experiment participants is being considered, as are publication fees for journals and conferences.

  • Research Products

    (2 results)

All 2018

All: Journal Article (1 result) (Int'l Joint Research: 1, Peer Reviewed: 1, Open Access: 1), Presentation (1 result) (Int'l Conference: 1)

  • [Journal Article] Android Feedback-Based Training Modulates Sensorimotor Rhythms During Motor Imagery (2018)

    • Author(s)
      Christian I. Penaloza, Maryam Alimardani, and Shuichi Nishio
    • Journal

      IEEE Transactions on Neural Systems and Rehabilitation Engineering

      Volume: 26, Pages: 666-674

    • DOI

      10.1109/TNSRE.2018.2792481

    • Peer Reviewed / Open Access / Int'l Joint Research
  • [Presentation] Towards Intelligent Brain-controlled Body Augmentation Robotic Limbs (2018)

    • Author(s)
      Christian I. Penaloza, David Hernandez and Shuichi Nishio
    • Conference
      IEEE International Conference on Systems, Man, and Cybernetics
    • International Conference


Publication Date: 2019-12-27
