2018 Fiscal Year Final Research Report
Multimodal Deep Learning Framework for Intelligent Brain Computer Interface System
Project/Area Number | 17K13279
Research Category | Grant-in-Aid for Young Scientists (B)
Allocation Type | Multi-year Fund
Research Field | Brain biometrics
Research Institution | Advanced Telecommunications Research Institute International
Principal Investigator | Penaloza Christian, Advanced Telecommunications Research Institute International, Hiroshi Ishiguro Laboratories, Collaborative Researcher (80753532)
Research Collaborator | Hernandez-Carmona David
Project Period (FY) | 2017-04-01 – 2019-03-31
Keywords | BMI / robot / brain waves / EEG
Outline of Final Research Achievements |
We developed a BMI system that incorporates a multimodal approach to learn the correlation among the context of a task, visual sensory data, and brain data. The platform for testing this system consisted of a human-like robotic arm controlled with a BMI. The arm is activated (i.e., performs a grasping action) when the human operator imagines the grasping action. Since the arm can perform the action in different ways (i.e., different grasping configurations) depending on the context (i.e., the type of object), the arm recognizes the object and chooses the best grasping configuration. Moreover, we proposed a method to decode the visual shape of objects from brain data. More specifically, we recorded EEG data during an object-grasping experiment and used the EEG to reconstruct the image of the object. To achieve this goal, we developed a deep stacked convolutional autoencoder that learned a noise-free joint representation of the EEG and the object image.
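The joint EEG-image autoencoder described above can be illustrated with a minimal sketch. This is not the project's actual architecture; all shapes, layer sizes, and names here (e.g. 14 EEG channels, 128 time samples, 32x32 grayscale object images, `JointEEGImageAutoencoder`) are illustrative assumptions. The sketch, in PyTorch, encodes each modality with stacked convolutions, fuses the two codes into one latent representation, and decodes the object image from it:

```python
import torch
import torch.nn as nn


class JointEEGImageAutoencoder(nn.Module):
    """Hypothetical multimodal stacked convolutional autoencoder.

    Assumed inputs: EEG of shape (batch, 14 channels, 128 samples) and
    object images of shape (batch, 1, 32, 32). Both encoders map into a
    shared latent code, from which the object image is reconstructed.
    """

    def __init__(self, latent_dim=64):
        super().__init__()
        # EEG encoder: stacked 1-D convolutions over the time axis
        self.eeg_enc = nn.Sequential(
            nn.Conv1d(14, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),  # 128 -> 64
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),  # 64 -> 32
            nn.Flatten(),
            nn.Linear(64 * 32, latent_dim),
        )
        # Image encoder: stacked 2-D convolutions
        self.img_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, latent_dim),
        )
        # Fuse both modality codes into a single joint latent representation
        self.fuse = nn.Linear(2 * latent_dim, latent_dim)
        # Image decoder: transposed convolutions back to 32x32
        self.dec_fc = nn.Linear(latent_dim, 32 * 8 * 8)
        self.img_dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),      # 8x8 -> 16x16
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),    # 16x16 -> 32x32
        )

    def forward(self, eeg, img):
        z = self.fuse(torch.cat([self.eeg_enc(eeg), self.img_enc(img)], dim=1))
        h = self.dec_fc(z).view(-1, 32, 8, 8)
        return self.img_dec(h)


# Forward pass on random data with the assumed shapes
eeg = torch.randn(4, 14, 128)   # hypothetical batch: 14 channels, 128 samples
img = torch.rand(4, 1, 32, 32)  # hypothetical 32x32 grayscale object images
recon = JointEEGImageAutoencoder()(eeg, img)
# recon.shape == torch.Size([4, 1, 32, 32])
```

At test time, a model trained this way could be given EEG alone (e.g. by zeroing or marginalizing the image branch) to reconstruct the object image from brain data, which is the decoding setting the report describes; training would minimize a reconstruction loss such as `nn.MSELoss()` between `recon` and `img`.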
Free Research Field | Brain-machine interface
Academic Significance and Societal Importance of the Research Achievements |
A brain-machine interface is a technology that will revolutionize the way people interact with external devices in the future. In this research, our system can be used to augment users' ability to perform multiple tasks by controlling an intelligent, semi-autonomous robotic arm.