Outline of Annual Research Achievements
Experiments were conducted in a vineyard using RGB-D, IMU, and LiDAR sensors, and data was collected at different phases of cultivation. The navigation component was completed, including efficient tracking of people working in the vineyard using on-board and external cameras. High-resolution RGB and point cloud data of grapes, with and without occlusion, has been collected. A small dataset has been labeled and tested with a deep network; the results show that more data is required for acceptable performance. The earlier system used an A* path planner with obstacle avoidance, and a new method based on reinforcement learning navigation is being tested on a small dataset. Labeling and processing of the entire dataset will take more time and is planned to be completed during the extension period. The following papers were published in international conferences:
A. Ravankar et al., "Semantic Scene Understanding and Region Classification for Navigation of Service Robots in Care Scenarios," The Twenty-Ninth International Symposium on Artificial Life and Robotics (AROB 29th 2024), 2024.
R. Ishikawa and A. Ravankar, "Tracking People Across Multiple Cameras," International Conference on Communication, Computing and Data Security (ICCCDS-2023), Multicon-W 2023, Thakur College of Engineering and Technology, Mumbai, India, 2023.
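For illustration, the sketch below shows grid-based A* path planning with obstacle avoidance in the spirit of the earlier navigation system. The occupancy grid, 4-connected neighborhood, and Manhattan heuristic are illustrative assumptions, not the deployed configuration.

```python
# A minimal sketch of grid-based A* planning with obstacle avoidance.
# The grid layout and 4-connected moves are illustrative assumptions.
import heapq

def astar(grid, start, goal):
    """Shortest collision-free path on a 2D occupancy grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    visited = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                if nxt not in visited:
                    heapq.heappush(open_set, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None  # no collision-free path exists

# Example: route around a wall of obstacles in the third column.
grid = [[0, 0, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 1, 0]]
print(astar(grid, (0, 0), (0, 3)))
```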
Strategy for Future Research Activity
The plan for this year is to process the high-resolution RGB and point cloud data of grapes, with and without occlusion. The first task is labeling the data, after which the entire dataset will be processed. With the new dataset, a new reinforcement learning model will be trained for robot navigation. Classification of grapes into clusters will make it possible to find the best cutting location in real time, and the point cloud data will enable filtering of occlusions. The system will then be integrated with all the sub-modules: navigation, actuators, SLAM, grape classification, and operation. The integrated system will be tested in this fiscal year.
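As a rough illustration of cluster-based cutting-point estimation, the sketch below groups a grape point cloud with DBSCAN and proposes one candidate cutting location per cluster. The eps/min_samples values and the heuristic that a cluster's topmost point approximates the stem attachment are illustrative assumptions, not the project's final method.

```python
# A minimal sketch, assuming DBSCAN-style clustering of the grape point
# cloud; parameters and the "topmost point ~ stem" heuristic are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

def candidate_cutting_points(points, eps=0.03, min_samples=20):
    """Cluster an (N, 3) point cloud in meters and return one candidate
    cutting location per cluster: its highest point along the z-axis."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    candidates = []
    for label in set(labels) - {-1}:                 # -1 marks noise points
        cluster = points[labels == label]
        candidates.append(cluster[np.argmax(cluster[:, 2])])  # max z
    return candidates

# Example with synthetic data: two tight "bunches" of random points.
rng = np.random.default_rng(0)
bunch_a = rng.normal([0.0, 0.0, 1.0], 0.01, size=(200, 3))
bunch_b = rng.normal([0.3, 0.0, 1.2], 0.01, size=(200, 3))
print(candidate_cutting_points(np.vstack([bunch_a, bunch_b])))
```

In practice the same clustering step could also support occlusion filtering, since points belonging to foreground leaves would fall into separate clusters; this is a design sketch, not the validated pipeline.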