Learning 3D information and ego-motion from acoustic video in extreme underwater environment
Project/Area Number |
23K19993
|
Research Category |
Grant-in-Aid for Research Activity Start-up
|
Allocation Type | Multi-year Fund |
Review Section |
1002:Human informatics, applied informatics and related fields
|
Research Institution | The University of Tokyo |
Principal Investigator |
オウ ギョクセイ, The University of Tokyo, Graduate School of Engineering (Faculty of Engineering), Assistant Professor (00984270)
|
Project Period (FY) |
2023-08-31 – 2025-03-31
|
Project Status |
Granted (Fiscal Year 2023)
|
Budget Amount |
¥2,860,000 (Direct Cost: ¥2,200,000, Indirect Cost: ¥660,000)
Fiscal Year 2024: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2023: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
|
Keywords | Acoustic camera / 2D forward looking sonar / Deep learning / Self-supervised learning / 3D reconstruction |
Outline of Research at the Start |
This research aims to develop a novel method that simultaneously understands the surrounding environment and estimates the ego-motion of an underwater robot from acoustic camera imagery, using state-of-the-art deep learning techniques. Both simulation and field experiments will be carried out.
|
Outline of Annual Research Achievements |
Our goal is to estimate 3D information and motion from acoustic video in a self-supervised manner. During this fiscal year, 3D information was successfully derived from acoustic image sequences using self-supervised learning, and the results met our expectations. Our first paper was published at the prestigious robotics conference IROS 2023. The next step is to estimate motion from the acoustic video. To this end, we developed a geometric model of the problem and verified its feasibility; early results have demonstrated the viability of the method, and we are drafting a detailed paper on this work. We also conducted an experiment in a large-scale water tank, successfully gathering data that will further support our findings. In addition, we updated our simulator to render more realistic acoustic images for better evaluation, and a corresponding paper on the simulator is in preparation.
|
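The geometric model above builds on the standard projection of a 2D forward-looking sonar (acoustic camera): a 3D point is imaged by its range and azimuth, while the elevation angle is discarded by the sensor. This elevation ambiguity is what the learning method must resolve to recover 3D information. A minimal sketch of that projection (not the project's exact formulation; the coordinate convention is an assumption):

```python
import numpy as np

def project_to_sonar(points_xyz):
    """Project 3D points (x forward, y starboard, z down) to
    2D forward-looking sonar image coordinates (range, azimuth).

    The elevation angle phi is returned separately only to show
    what information the sensor loses; it does not appear in the
    acoustic image itself.
    """
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)   # slant range
    theta = np.arctan2(y, x)          # azimuth angle
    phi = np.arcsin(z / r)            # elevation (lost in the image)
    return np.stack([r, theta], axis=1), phi

# Any two points sharing the same range and azimuth but different
# elevation map to the same sonar pixel:
coords, phi = project_to_sonar(np.array([[2.0, 0.5, 0.3]]))
```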
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
This project aims to address two challenges simultaneously using acoustic video: first, estimating 3D information, and second, estimating motion. The first challenge has been successfully resolved, and we have now shifted our focus to the second. Theoretical analyses and preliminary results have demonstrated the feasibility of our approach. We regard this as a significant step forward and are confident that it will lead to a noteworthy publication.
|
Strategy for Future Research Activity |
In the next phase, this project will concentrate on estimating motion information from acoustic video using a self-supervised method. Initially, we will revisit and refine the theoretical foundation. Following that, numerical calculations will be performed for verification purposes. Subsequently, we will conduct simulation experiments using synthetic images to evaluate our approach. Finally, we plan to carry out experiments in a water tank to collect a real dataset. After analyzing these results, we aim to compile and submit our findings to a top-tier robotics journal.
|
Report
(1 result)
Research Products
(2 results)