2023 Fiscal Year Final Research Report
Development of generating an auto-encoder for abstract target without explicit classification
Project/Area Number | 21K12079 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Multi-year Fund |
Section | General |
Review Section | Basic Section 61050: Intelligent robotics-related |
Research Institution | Ichinoseki National College of Technology (2023); Ritsumeikan University (2021-2022) |
Principal Investigator | Matsuo Tadashi, Ichinoseki National College of Technology, Other Departments, Associate Professor (80449545) |
Co-Investigator (Kenkyū-buntansha) | Shimada Nobutaka, Ritsumeikan University, College of Information Science and Engineering, Professor (10294034) |
Project Period (FY) | 2021-04-01 – 2024-03-31 |
Keywords | Machine learning / Unsupervised learning / Auto-encoder / Computer vision |
Outline of Final Research Achievements | When a robot hand grasps a granular foodstuff such as chopped green onions, estimating the grasped weight can be regarded as extracting essential information from a complex and indefinite situation, such as the state of the foodstuff. The relation between a robot action and the grasped weight depends on the state of the foodstuff, yet an experiment cannot be repeated under the same situation, because a single action destroys that situation. We therefore proposed a framework for estimating the relation between an action and the grasped weight from an image of the foodstuff that represents the situation. The framework can also estimate the probabilistic distribution of weights, so it can be applied to various robot actions with probabilistic results. We applied the framework to experiments of grasping green onions with a soft robot hand and confirmed that it works well. |
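The report itself contains no source code. The following is a minimal illustrative sketch, not the authors' implementation: it assumes a PyTorch model (all class, function, and variable names such as GraspWeightDensityNet are hypothetical) that maps an image of the foodstuff plus a robot-action parameter to a Gaussian distribution over the grasped weight, trained with a negative log-likelihood loss on (image, action, measured weight) triples.

# Minimal sketch (hypothetical, not the authors' implementation): predict a
# Gaussian distribution over the grasped weight from an image and an action.
import torch
import torch.nn as nn


class GraspWeightDensityNet(nn.Module):
    def __init__(self, action_dim: int = 1):
        super().__init__()
        # Small CNN encoder for the image of the granular foodstuff.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head fusing image features with the action; outputs the mean and
        # log-variance of the predicted weight distribution.
        self.head = nn.Sequential(
            nn.Linear(32 + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, image: torch.Tensor, action: torch.Tensor):
        features = self.encoder(image)
        mean, log_var = self.head(torch.cat([features, action], dim=1)).chunk(2, dim=1)
        return mean, log_var


def nll_loss(mean, log_var, weight):
    # Negative log-likelihood of the measured weight under the predicted Gaussian.
    return (0.5 * (log_var + (weight - mean) ** 2 / log_var.exp())).mean()


if __name__ == "__main__":
    model = GraspWeightDensityNet(action_dim=1)
    image = torch.randn(4, 3, 64, 64)   # images of the foodstuff pile (dummy data)
    action = torch.randn(4, 1)          # e.g. closing strength of the soft hand
    weight = torch.rand(4, 1)           # measured grasped weight (dummy data)
    mean, log_var = model(image, action)
    loss = nll_loss(mean, log_var, weight)
    loss.backward()
    print(float(loss))

A single Gaussian output head is only the simplest choice for such a sketch; the point it illustrates is that training needs only observed (image, action, weight) triples, without any prior classification of situations.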
Free Research Field | Image recognition |
Academic Significance and Societal Importance of the Research Achievements | In this research, we developed a method for predicting, with probabilities, the result obtained by a robot action in a given situation. The proposed method does not require classifying situations in advance; a prediction model can be built from actual situations, robot-action information, and result information, which reduces the effort of manual analysis. We applied the method to an experiment in which a robot hand grasps an appropriate amount from a pile of chopped green onions, and confirmed that the probability distribution of the actually grasped amount can be predicted. |
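Continuing the hypothetical sketch above (again an assumption, not the procedure described in the report), a predicted weight distribution could be used to compare candidate actions, for example to pick the action most likely to grasp a desired target weight.

# Usage sketch (hypothetical): score candidate actions by the predicted
# Gaussian density of a target grasped weight and return the best one.
import torch


def choose_action(model, image, candidate_actions, target_weight):
    """Return the candidate action whose predicted weight distribution
    assigns the highest density to the target weight."""
    model.eval()
    best_action, best_log_density = None, float("-inf")
    with torch.no_grad():
        for action in candidate_actions:
            mean, log_var = model(image.unsqueeze(0), action.view(1, -1))
            # Gaussian log-density of the target weight under the prediction.
            log_density = -0.5 * (log_var + (target_weight - mean) ** 2 / log_var.exp()
                                  + torch.log(torch.tensor(2 * torch.pi)))
            if log_density.item() > best_log_density:
                best_action, best_log_density = action, log_density.item()
    return best_action, best_log_density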