Project/Area Number |
18K04046
|
Research Category |
Grant-in-Aid for Scientific Research (C)
|
Allocation Type | Multi-year Fund |
Section | General |
Review Section |
Basic Section 20020: Robotics and intelligent system-related
|
Research Institution | University of Yamanashi |
Principal Investigator |
|
Project Period (FY) |
2018-04-01 – 2021-03-31
|
Project Status |
Completed (Fiscal Year 2020)
|
Budget Amount |
¥4,420,000 (Direct Cost: ¥3,400,000, Indirect Cost: ¥1,020,000)
Fiscal Year 2020: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2019: ¥1,300,000 (Direct Cost: ¥1,000,000, Indirect Cost: ¥300,000)
Fiscal Year 2018: ¥2,210,000 (Direct Cost: ¥1,700,000, Indirect Cost: ¥510,000)
|
Keywords | Control engineering / Mobile robots / Visual information / Self-localization / Sensor fusion / Visual feedback control / Particle filter / Spherical camera / Fisheye camera / Formation control |
Outline of Final Research Achievements |
Recognizing objects in images from a camera mounted on a robot makes advanced tasks such as search, surveillance, and object detection feasible. In this research, we constructed a recognition system that integrates a camera with multiple physical sensors and used a probabilistic model to deal with the uncertainty inherent in image recognition. Furthermore, to improve the robot's image-recognition capability, we introduced a fisheye camera and a spherical camera, developed a method for modeling the image-distortion characteristics of these cameras, and devised a new control strategy that takes those characteristics into account. We also developed a method for estimating the walking motion of a person using multiple models and, based on it, proposed a control law that enables a mobile robot to follow the person. Finally, we fused the information from the camera and the physical sensors, built a learning model by machine learning, and applied it to self-localization of a mobile robot.
|
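The probabilistic treatment of recognition uncertainty described above can be illustrated with a minimal one-dimensional particle filter that fuses odometry with a noisy absolute position reading, as a camera-based localization system might provide. This is an illustrative sketch only, not the project's implementation; all function names, the 1-D state, and the noise parameters are assumptions made for the example.

```python
import math
import random


def particle_filter_step(particles, weights, motion, measurement,
                         motion_noise=0.1, meas_noise=0.5, rng=None):
    """One predict-update-resample cycle of a 1-D particle filter.

    particles   -- list of position hypotheses
    weights     -- importance weights (same length as particles)
    motion      -- commanded displacement from odometry
    measurement -- noisy absolute position reading (e.g. camera landmark)
    """
    rng = rng or random.Random()
    # Predict: propagate each particle through the motion model with noise.
    particles = [p + motion + rng.gauss(0.0, motion_noise) for p in particles]
    # Update: weight each particle by a Gaussian measurement likelihood.
    weights = [math.exp(-0.5 * ((p - measurement) / meas_noise) ** 2)
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    particles = rng.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights


def estimate(particles):
    """Point estimate of the robot position: the particle mean."""
    return sum(particles) / len(particles)


if __name__ == "__main__":
    rng = random.Random(0)
    n = 500
    particles = [rng.uniform(-5.0, 5.0) for _ in range(n)]
    weights = [1.0 / n] * n
    true_pos = 0.0
    for _ in range(20):            # robot advances 0.2 per step
        true_pos += 0.2
        meas = true_pos + rng.gauss(0.0, 0.5)
        particles, weights = particle_filter_step(
            particles, weights, 0.2, meas, rng=rng)
    print(estimate(particles))     # converges near true_pos
```

The same predict-update-resample structure generalizes to the 2-D or 3-D poses and multi-sensor likelihoods used in mobile-robot self-localization.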
Academic Significance and Societal Importance of the Research Achievements |
We investigated control methods for mobile robots using fisheye and spherical cameras. Replacing a conventional camera with a fisheye or similar camera widens the robot's field of view, enabling collision avoidance and reducing the number of onboard cameras. An estimation method based on multiple probabilistic models can cope with the irregular motion of a person. Furthermore, by fusing camera and physical-sensor information and using it for learning, we realized robot self-localization in environments where conventional methods struggle. The proposed methods enable good image recognition and control of robots in complex environments. The results of this research project are expected to contribute to the development of more practical robot systems.
|