Research Project/Area Number | 17K13279 |
Research Institution | 株式会社国際電気通信基礎技術研究所 |
Principal Investigator | Penaloza C., 株式会社国際電気通信基礎技術研究所, 石黒浩特別研究所, Researcher (80753532) |
Project Period (FY) | 2017-04-01 – 2019-03-31 |
Keywords | Brain Machine Interface |
Outline of Annual Research Achievements |
We report the progress of the BMI system, which incorporates a multimodal approach to learn the correlation between the context of a task, visual sensory data, and brain data. An experiment was conducted to determine the optimal type of interface (virtual or physical) for the BMI system. Results showed that a physical, human-looking interface (human-like hands) produced optimal feedback; these results were published in a peer-reviewed journal. Subsequently, a human-like robot arm was acquired to perform goal-oriented task experiments, and the results were submitted to a Science journal. Finally, a camera was added to the robot arm so that the robot could recognize visual context using Deep Learning. The system prototype was completed, and the user study results were submitted to an international peer-reviewed conference.
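The report does not name the detection model behind the camera-based context recognition, so the following is only a minimal sketch, assuming a pretrained Faster R-CNN from torchvision; the function name `detect_context` and the score threshold are illustrative, not values from the project.

```python
# Sketch of the visual-context step: run a pretrained object detector on a
# frame from the arm-mounted camera and keep confident detections as the
# scene context. The detector choice and threshold are assumptions.
import torch
import torchvision

# weights="DEFAULT" requires torchvision >= 0.13 (older versions: pretrained=True)
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def detect_context(frame: torch.Tensor, score_thr: float = 0.7):
    """frame: float tensor (3, H, W) with values in [0, 1] from the camera."""
    with torch.no_grad():
        pred = detector([frame])[0]                 # dict with boxes, labels, scores
    keep = pred["scores"] > score_thr
    return list(zip(pred["labels"][keep].tolist(),  # COCO class indices
                    pred["scores"][keep].tolist()))

# Dummy frame standing in for a real 640x480 camera image.
print(detect_context(torch.rand(3, 480, 640)))      # -> [(label_id, score), ...]
```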
|
Current Status of Research Progress |
2: Progressing generally smoothly
Reason
The project has progressed smoothly. Although some of the originally planned activities have been accomplished, others are still pending completion. The multimodal sensory data integration stage has now begun: a camera was added to the robot arm so that the robot can analyze visual content and recognize the context, and Deep Learning algorithms were implemented for object detection and human action recognition. The system prototype was completed, and a user study was conducted to confirm that the system functions properly. The user study results were submitted to an international peer-reviewed conference.
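The action-recognition component is likewise unspecified in the report; the sketch below assumes a pretrained 3D-ResNet video classifier from torchvision applied to a short clip from the arm camera. The clip length, resolution, and model choice are assumptions for illustration only.

```python
# Sketch of the human-action-recognition step: classify a short video clip
# from the robot's camera with a pretrained 3D CNN (R3D-18 trained on
# Kinetics-400). Model and clip shape are illustrative assumptions.
import torch
import torchvision

model = torchvision.models.video.r3d_18(weights="DEFAULT")
model.eval()

def recognize_action(clip: torch.Tensor) -> int:
    """clip: float tensor (3, n_frames, H, W), normalized like the training data."""
    with torch.no_grad():
        logits = model(clip.unsqueeze(0))   # add a batch dimension
    return int(logits.argmax(dim=1))        # index into the Kinetics-400 label set

# Dummy clip standing in for 16 camera frames at 112x112 resolution.
print(recognize_action(torch.rand(3, 16, 112, 112)))
```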
|
Strategy for Future Research Activity |
Tactile sensors will be installed in the hands of the robot to provide information such as pressure, vibration, and stiffness, and to deliver appropriate vibrotactile feedback to the user. Software that integrates sensing data from multiple sources and sends control commands to the robot will be developed. Finally, the proposed approach will apply Multimodal Deep Learning principles to learn the correlation between the neural activity of the operator and the visual-tactile sensory information from the robot while it performs a sequence of goal-oriented actions to complete a particular task (see the sketch below). Experimental trials with human participants will be conducted. System performance will be evaluated based on task accuracy, and subject feedback will be recorded using pre- and post-experiment questionnaires.
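The fusion architecture has not yet been fixed; as a minimal sketch, assuming feature vectors have already been extracted from the EEG, visual, and tactile streams, one simple concatenation-based design could look like this. All dimensions, layer sizes, and the action set are assumptions, not project specifications.

```python
# Sketch of the planned multimodal learning stage: separate encoders for EEG,
# visual, and tactile features, fused by concatenation into one representation
# that predicts the robot's next goal-oriented action.
import torch
import torch.nn as nn

class MultimodalPolicy(nn.Module):
    def __init__(self, eeg_dim=64, vis_dim=512, tac_dim=12, n_actions=4):
        super().__init__()
        self.eeg_enc = nn.Sequential(nn.Linear(eeg_dim, 128), nn.ReLU())
        self.vis_enc = nn.Sequential(nn.Linear(vis_dim, 128), nn.ReLU())
        self.tac_enc = nn.Sequential(nn.Linear(tac_dim, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(128 + 128 + 32, 128), nn.ReLU(),
                                  nn.Linear(128, n_actions))

    def forward(self, eeg, vis, tac):
        fused = torch.cat([self.eeg_enc(eeg), self.vis_enc(vis),
                           self.tac_enc(tac)], dim=-1)
        return self.head(fused)              # logits over candidate actions

# Dummy batch standing in for synchronized sensor windows.
model = MultimodalPolicy()
logits = model(torch.randn(8, 64), torch.randn(8, 512), torch.randn(8, 12))
print(logits.shape)                          # torch.Size([8, 4])
```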
|
Causes of Carryover |
Several materials still need to be purchased, such as sensors, processing units, and an AR headset to provide novel visual feedback to users. Moreover, payment for experiment participants is also being considered. Finally, publication fees for journals and conferences are also being considered.
|