Study of affective sounds applied to the auditory brain-machine interface towards smooth communication
Project/Area Number | 16K16477
Research Category | Grant-in-Aid for Young Scientists (B)
Allocation Type | Multi-year Fund
Research Field | Rehabilitation science/Welfare engineering
Research Institution | Chiba University (2017); National Rehabilitation Center for Persons with Disabilities (2016)
Principal Investigator | Onishi Akinari (Chiba University, Center for Frontier Medical Engineering, Specially Appointed Researcher; 20747969)
Project Period (FY) | 2016-04-01 – 2018-03-31
Project Status | Completed (Fiscal Year 2017)
Budget Amount | ¥2,210,000 (Direct Cost: ¥1,700,000, Indirect Cost: ¥510,000)
Fiscal Year 2017: ¥780,000 (Direct Cost: ¥600,000, Indirect Cost: ¥180,000)
Fiscal Year 2016: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Keywords | User interface / Rehabilitation / Brain-machine interface (BMI) / P300 / Emotion / Brain-computer interface / EEG
Outline of Final Research Achievements |
A brain-machine interface (BMI) translates brain signals such as the electroencephalogram (EEG) into commands for controlling devices. Because a BMI can be driven by thought alone, it could serve as a next-generation communication aid for persons with disabilities. To improve BMIs that are operated by silently counting stimuli, affective sounds were introduced into the BMI and their effect was evaluated. The results of this study suggest that BMI performance was improved by an affective sound. Moreover, a patient with amyotrophic lateral sclerosis operated the BMI with 90% classification accuracy. In addition, ensemble convoluted feature extraction, which exploits the EEG differences elicited by the sounds, achieved higher classification accuracy than a traditional multidimensional time-series feature.
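The auditory P300 BMI described above works by detecting, in single EEG epochs, the response evoked by the stimulus the user is silently counting. As a minimal illustration only, not the study's actual pipeline, the sketch below classifies simulated single-channel epochs with a plain least-squares linear classifier (a common simple stand-in for the LDA-family classifiers used in P300 BMIs); all data and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-channel EEG epochs (hypothetical, not data from the study):
# 100 samples per epoch; target epochs carry a P300-like positive deflection.
n_epochs, n_samples = 200, 100
t = np.arange(n_samples)
p300 = np.exp(-0.5 * ((t - 60) / 8.0) ** 2)        # Gaussian bump as a P300 stand-in

y = rng.integers(0, 2, n_epochs)                   # 1 = attended (counted) stimulus
X = rng.normal(0.0, 1.0, (n_epochs, n_samples))    # background EEG noise
X[y == 1] += 2.0 * p300                            # add the evoked response to targets

# Train a least-squares linear classifier on the first 150 epochs.
X_train, y_train = X[:150], y[:150]
X_test, y_test = X[150:], y[150:]

A = np.hstack([X_train, np.ones((len(X_train), 1))])          # append a bias column
w, *_ = np.linalg.lstsq(A, 2.0 * y_train - 1.0, rcond=None)   # labels mapped to +/-1

# Score held-out epochs: positive score -> classified as a target epoch.
scores = np.hstack([X_test, np.ones((len(X_test), 1))]) @ w
accuracy = np.mean((scores > 0).astype(int) == y_test)
print(f"classification accuracy: {accuracy:.2f}")
```

In a real speller, such per-epoch scores are accumulated over repeated stimulus presentations and the highest-scoring stimulus is selected as the intended command.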
Report (3 results)
Research Products (5 results)