2017 Fiscal Year Research-status Report
Multimodal Deep Learning Framework for Intelligent Brain Computer Interface System
Project/Area Number | 17K13279 |
Research Institution | Advanced Telecommunications Research Institute International |
Principal Investigator | Penaloza C., Advanced Telecommunications Research Institute International (ATR), Hiroshi Ishiguro Laboratories, Researcher (80753532) |
Project Period (FY) | 2017-04-01 – 2019-03-31 |
Keywords | Brain Machine Interface |
Outline of Annual Research Achievements |
We report progress on the BMI system, which incorporates a multimodal approach to learn the correlation among task context, visual sensory data, and brain data. An experiment was conducted to determine the optimal type of interface (virtual or physical) for the BMI system. Results showed that a physical human-looking interface (human-like hands) provided optimal feedback; these results were published in a peer-reviewed journal. Subsequently, a human-like robot arm was acquired to perform goal-oriented task experiments, and the results were submitted to a Science journal. Finally, a camera was added to the robot arm so that the robot could recognize visual context using Deep Learning. The system prototype was completed, and user study results were submitted to an international peer-reviewed conference.
|
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
The project has progressed smoothly. Some of the originally planned activities have been accomplished, while others are still pending completion. The multimodal sensory data integration stage has now begun: a camera was added to the robot arm so that the robot could analyze visual content and recognize the context, and Deep Learning algorithms were implemented for object detection and human action recognition. The system prototype was completed, and a user study was conducted to confirm the proper functionality of the system. User study results were submitted to an international peer-reviewed conference. An illustrative sketch of the detection pipeline appears below.
|
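The report does not name the specific models or framework used for visual recognition, so the following is only a minimal sketch of the object-detection half of the pipeline, assuming a pretrained torchvision detector; the model choice, preprocessing, and score threshold are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch (assumption): run a pretrained detector on one camera frame.
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

# Pretrained Faster R-CNN as a stand-in for the project's (unnamed) detector.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(image_path, score_threshold=0.7):
    """Return (label_id, score, box) tuples for confident detections."""
    image = Image.open(image_path).convert("RGB")
    tensor = F.to_tensor(image)        # PIL image -> CHW float tensor in [0, 1]
    with torch.no_grad():
        output = model([tensor])[0]    # single-image batch
    return [
        (int(label), float(score), box.tolist())
        for label, score, box in zip(
            output["labels"], output["scores"], output["boxes"]
        )
        if score >= score_threshold
    ]
```

In a BMI setting such as the one described, the detected objects would supply the visual-context signal that is combined with the operator's brain data.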
Strategy for Future Research Activity |
Tactile sensors will be installed in the hands of the robot to provide information such as pressure, vibration, and stiffness, and to give the user proper vibrotactile feedback. Software that integrates sensing data from multiple sources and sends control commands to the robot will be developed. Finally, a novel approach based on Multimodal Deep Learning principles will be proposed to learn the correlation between the operator's neural activity and the robot's visual-tactile sensory information while the robot performs a sequence of goal-oriented actions to complete a particular task (a sketch of one possible fusion architecture appears below). Experimental trials with human participants will be conducted; system performance will be evaluated based on task accuracy, and subject feedback will be recorded using pre- and post-experiment questionnaires.
|
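The fusion approach described above could be realized in many ways; the following is a minimal sketch, assuming a simple late-fusion network in PyTorch in which EEG, visual, and tactile streams are encoded separately and a joint head predicts the next goal-oriented action. All modality dimensions, layer sizes, and the four-action output are illustrative assumptions.

```python
# Minimal sketch (assumption): late fusion of EEG, visual, and tactile features.
import torch
import torch.nn as nn

class MultimodalBMINet(nn.Module):
    def __init__(self, eeg_dim=64, visual_dim=512, tactile_dim=16, n_actions=4):
        super().__init__()
        # One small encoder per modality (sizes are placeholders).
        self.eeg_enc = nn.Sequential(nn.Linear(eeg_dim, 128), nn.ReLU())
        self.vis_enc = nn.Sequential(nn.Linear(visual_dim, 128), nn.ReLU())
        self.tac_enc = nn.Sequential(nn.Linear(tactile_dim, 32), nn.ReLU())
        # Fusion head maps the concatenated embedding to action logits.
        self.head = nn.Sequential(
            nn.Linear(128 + 128 + 32, 64), nn.ReLU(), nn.Linear(64, n_actions)
        )

    def forward(self, eeg, visual, tactile):
        fused = torch.cat(
            [self.eeg_enc(eeg), self.vis_enc(visual), self.tac_enc(tactile)],
            dim=-1,
        )
        return self.head(fused)  # logits over goal-oriented actions

# Smoke test with random stand-in features for a batch of 8 trials.
net = MultimodalBMINet()
logits = net(torch.randn(8, 64), torch.randn(8, 512), torch.randn(8, 16))
print(logits.shape)  # torch.Size([8, 4])
```

Late fusion keeps each modality encoder independently replaceable, which suits a project in which the tactile sensors and their feature format are still to be installed.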
Causes of Carryover |
Several materials still need to be purchased, such as sensors, processing units, and an AR headset to provide novel visual feedback to users. Moreover, payment for experiment participants is also being considered, as are publication fees for journals and conferences.
|
Research Products (2 results)