Research Project/Area Number | 23K19993 |
Research Category | Grant-in-Aid for Research Activity Start-up |
Allocation Type | Multi-year Fund |
Review Section | 1002: Human informatics, applied informatics, and related fields |
Research Institution | The University of Tokyo |
Principal Investigator | オウ ギョクセイ, The University of Tokyo, Graduate School of Engineering (Faculty of Engineering), Assistant Professor (00984270) |
Project Period (FY) | 2023-08-31 – 2025-03-31 |
Project Status | Granted (FY2023) |
Budget Amount *Note | Total: 2,860 thousand yen (Direct Cost: 2,200 thousand yen, Indirect Cost: 660 thousand yen)
FY2024: 1,430 thousand yen (Direct Cost: 1,100 thousand yen, Indirect Cost: 330 thousand yen)
FY2023: 1,430 thousand yen (Direct Cost: 1,100 thousand yen, Indirect Cost: 330 thousand yen) |
Keywords | Acoustic camera / 2D forward looking sonar / Deep learning / Self-supervised learning / 3D reconstruction |
Outline of Research at the Start | This research aims to develop a novel method for an underwater robot to simultaneously understand its surrounding environment and estimate its ego-motion using an acoustic camera, by leveraging state-of-the-art deep learning techniques. Both simulation and field experiments will be carried out. |
Summary of Research Achievements | Our goal is to estimate 3D information and motion from acoustic video in a self-supervised manner. During this fiscal year, 3D information was successfully derived from acoustic image sequences using self-supervised learning techniques, and the results met our expectations; our first paper was published at the robotics conference IROS 2023. The next step is to estimate motion from acoustic video. To achieve this, we developed a comprehensive geometric model of the problem and verified its feasibility through numerical testing; early results have demonstrated the viability of the method, and we are currently drafting a detailed paper on this work. We also carried out an experiment in a large-scale water tank and successfully gathered real data that will support the evaluation. In addition, we updated our simulator to render more realistic acoustic images for better evaluation, and a corresponding paper on the simulator is being written. |
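The core difficulty behind both the 3D estimation and the motion estimation is the imaging geometry of the acoustic camera: a 2D forward-looking sonar measures range and azimuth but does not record the elevation angle, so each pixel corresponds to an arc of possible 3D points. The minimal sketch below (NumPy assumed; `project_to_sonar` is a hypothetical helper for illustration, not the project's code) shows the projection that any 3D-recovery method must invert.

```python
import numpy as np

def project_to_sonar(points_xyz):
    """Project 3D points (x forward, y starboard, z down) into the
    (range, azimuth) measurement space of a 2D forward-looking sonar.

    The elevation angle phi is not recorded by the sensor, which is why
    recovering 3D structure from a single acoustic image is ill-posed.
    """
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)   # slant range (measured)
    theta = np.arctan2(y, x)          # azimuth angle (measured)
    phi = np.arcsin(z / r)            # elevation angle (lost in the image)
    return np.stack([r, theta], axis=1), phi

# Example: one point 10 m ahead, 1 m to starboard, 2 m below the sensor.
measurements, lost_elevation = project_to_sonar(np.array([[10.0, 1.0, 2.0]]))
```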
Current Status of Progress (Category) | 2: Progressing generally smoothly
Reason | This project addresses two challenges using acoustic video: first, estimating 3D information, and second, deriving motion. The first challenge has been successfully resolved, and we have now shifted our focus to the second. Theoretical analysis and preliminary results have demonstrated the feasibility of our approach; we believe this is a significant step for the field and expect it to lead to a noteworthy publication. |
Strategy for Future Research Activity | In the next phase, the project will focus on estimating motion from acoustic video with a self-supervised method. We will first revisit and refine the theoretical foundation; next, perform numerical calculations for verification; then evaluate the approach in simulation experiments with synthetic images; and finally conduct water-tank experiments to collect a real dataset. After analyzing these results, we aim to compile and submit our findings to a top-tier robotics journal. |
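As a rough illustration of the planned self-supervised scheme, one common pattern (borrowed from self-supervised structure-from-motion and adapted to sonar here purely as a sketch) is to predict a per-pixel elevation map and a relative pose, warp the next frame into the current one through the sonar geometry, and penalize the photometric difference. Everything below is hypothetical: `elev_net`, `pose_net`, and `warp_fn` stand in for components the report describes but does not publish.

```python
import torch
import torch.nn.functional as F

def self_supervised_step(elev_net, pose_net, warp_fn, img_t, img_t1):
    """One hypothetical training step on consecutive acoustic frames.

    Assumed interfaces (placeholders, not the project's published code):
      elev_net(img)            -> per-pixel elevation map for a frame
      pose_net(frame_pair)     -> relative ego-motion between the frames
      warp_fn(img, elev, pose) -> differentiable warp through the
                                  forward-looking-sonar geometry
    """
    elev_t = elev_net(img_t)
    pose = pose_net(torch.cat([img_t, img_t1], dim=1))  # NCHW tensors
    img_t_hat = warp_fn(img_t1, elev_t, pose)
    # Photometric consistency: if both elevation and motion are correct,
    # the warped frame should reproduce the observed frame.
    return F.l1_loss(img_t_hat, img_t)
```

The appeal of this formulation is that it needs no ground-truth 3D or pose labels: consistency between consecutive acoustic frames supervises both networks at once, which matches the self-supervised goal stated above.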