2022 Fiscal Year Final Research Report
Prosthetic Hand System Using Image Recognition and EMG
Project/Area Number | 19K12168 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Multi-year Fund |
Section | General |
Review Section | Basic Section 61050: Intelligent robotics-related |
Research Institution | Toyohashi University of Technology |
Principal Investigator | Fukumura Naohiro, Toyohashi University of Technology, Graduate School of Engineering, Associate Professor (90293753) |
Project Period (FY) | 2019-04-01 – 2023-03-31 |
Keywords | Myoelectric prosthetic hand / Image recognition / Visual-motor transformation model / Grasping hand shape determination / Autoencoder / Neural network |
Outline of Final Research Achievements |
For patients who have lost part of an upper limb, such as a hand, in an accident, a prosthetic hand that operates according to their intention is essential for improving quality of life. Myoelectric prosthetic hands have been studied for a long time, but the weakness of electromyographic signals makes complex control difficult. Combining them with robot-hand control based on AI image recognition, however, makes precise control of the prosthetic hand possible. In this study, we aimed to realize such a system and verified a visual-motor transformation model built with deep learning through experiments on a real robot hand. We demonstrated that the model can recognize the diameter of a cup from its image and compute an appropriate grasping hand shape for that size. Furthermore, we developed a prototype prosthetic hand system that combines image and electromyographic signals and validated its effectiveness.
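The keywords and the outline point to a visual-motor transformation model in which an autoencoder-style network links a cup image to a grasping hand shape. The following is a minimal sketch of one way such a model could be set up, assuming a multimodal autoencoder in PyTorch; the class name, layer sizes (64x64 image, 16 joint angles, 3 latent units), and training details are illustrative assumptions, not values taken from the report.

```python
import torch
import torch.nn as nn

class VisuoMotorAutoencoder(nn.Module):
    """Hypothetical multimodal autoencoder: a cup image and a hand posture
    are encoded into a shared low-dimensional middle layer and both are
    reconstructed from it. All dimensions are illustrative assumptions."""
    def __init__(self, image_dim=64 * 64, posture_dim=16, latent_dim=3):
        super().__init__()
        self.image_enc = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(),
                                       nn.Linear(256, 32), nn.ReLU())
        self.posture_enc = nn.Sequential(nn.Linear(posture_dim, 32), nn.ReLU())
        self.fuse = nn.Linear(32 + 32, latent_dim)   # shared middle layer
        self.image_dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                       nn.Linear(256, image_dim))
        self.posture_dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                         nn.Linear(32, posture_dim))

    def encode(self, image, posture):
        h = torch.cat([self.image_enc(image), self.posture_enc(posture)], dim=-1)
        return self.fuse(h)

    def forward(self, image, posture):
        z = self.encode(image, posture)
        return self.image_dec(z), self.posture_dec(z), z

# Training sketch: only reconstruction of the two modalities is supervised,
# so any object feature (e.g. cup diameter) captured by the middle layer
# emerges without an explicit teacher signal for it.
model = VisuoMotorAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

def train_step(image_batch, posture_batch):
    optimizer.zero_grad()
    img_rec, pos_rec, _ = model(image_batch, posture_batch)
    loss = mse(img_rec, image_batch) + mse(pos_rec, posture_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```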
Free Research Field | Computational neuroscience |
Academic Significance and Societal Importance of the Research Achievements |
The prosthetic hand system pursued in this research reads the user's intention from EMG signals and, used together with image recognition technology, can control a robot hand precisely. It therefore promises a substantial improvement in the quality of life of prosthetic hand users, and by substituting voice input it can also be applied to life-support robots for the elderly and bedridden patients. In addition, unlike many deep learning methods, the visual-motor transformation model we have verified can extract the object features needed for robot control in its middle layer without teacher signals, and constrained optimization problems can be solved easily through that layer. Furthermore, this feature extraction is achieved through the integration of image and posture information, which also demonstrates the usefulness of multimodal information integration.
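The statement that a constrained optimization problem can be solved through the middle layer suggests a relaxation-style search: the image is fixed and the hand posture is adjusted until it becomes consistent with the image via the shared latent representation. The sketch below illustrates one such search, reusing the hypothetical VisuoMotorAutoencoder from the previous sketch; the step counts, learning rate, and joint-limit penalty are assumptions for illustration and not the procedure actually used in the project.

```python
import torch
import torch.nn.functional as F

def determine_grasp_shape(model, image, posture_dim=16, steps=200, lr=0.05):
    """Hypothetical grasp-shape search: keep the trained network and the cup
    image fixed, and optimize the posture so that both reconstructions stay
    consistent through the shared middle layer."""
    posture = torch.zeros(1, posture_dim, requires_grad=True)  # initial guess
    opt = torch.optim.Adam([posture], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img_rec, pos_rec, _ = model(image, posture)
        # Reconstruction consistency couples the observed cup size to the
        # posture being searched for, via the shared latent code.
        loss = F.mse_loss(img_rec, image) + F.mse_loss(pos_rec, posture)
        # Illustrative constraint: penalize joint angles outside +/-1.5 rad.
        loss = loss + 0.01 * torch.relu(posture.abs() - 1.5).sum()
        loss.backward()
        opt.step()
    return posture.detach()
```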