
Smart drone audition: A search and rescue drone system that listens and communicates

Research Project

Project/Area Number 22KF0141
Project/Area Number (Other) 22F22769 (2022)
Research Category

Grant-in-Aid for JSPS Fellows

Allocation Type Multi-year Fund (2023)
Single-year Grants (2022)
Section Foreign
Review Section Basic Section 20020: Robotics and intelligent system-related
Research Institution Tokyo Institute of Technology

Principal Investigator

Kazuhiro Nakadai, Tokyo Institute of Technology, School of Engineering, Professor (70436715)

Co-Investigator (Kenkyū-buntansha) YEN BENJAMIN, Tokyo Institute of Technology, School of Engineering, Foreign Special Researcher
Project Period (FY) 2023-03-08 – 2025-03-31
Project Status Granted (Fiscal Year 2023)
Budget Amount
¥2,200,000 (Direct Cost: ¥2,200,000)
Fiscal Year 2024: ¥600,000 (Direct Cost: ¥600,000)
Fiscal Year 2023: ¥1,100,000 (Direct Cost: ¥1,100,000)
Fiscal Year 2022: ¥500,000 (Direct Cost: ¥500,000)
Keywords Drone audition / Acoustic signal processing / Deep learning
Outline of Research at the Start

This research aims to develop a drone intended for search and rescue. Equipped with "ears", the drone "listens" to an audible target and, in turn, promotes effective communication with the target. "Listening" includes recognising, locating and recording the target sound "clearly" (i.e. free of noise from the drone's rotors and surroundings) in the harsh, noisy environments typical of search and rescue.
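As a minimal illustration of the "locating" part of "listening", the sketch below estimates the direction of arrival of a target sound from the time difference between two microphones using GCC-PHAT. This is a generic textbook approach, not the method developed in this project, and the sample rate, microphone spacing and test signal are illustrative assumptions.

```python
# Minimal sketch (assumed, generic approach): two-microphone DOA estimation via GCC-PHAT.
import numpy as np

FS = 16000        # sample rate [Hz] (assumed)
MIC_DIST = 0.1    # microphone spacing [m] (assumed)
C = 343.0         # speed of sound [m/s]

def gcc_phat(x, y, fs=FS, max_tau=MIC_DIST / C):
    """Delay of x relative to y (positive if x arrives later), via GCC-PHAT."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-12                      # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = int(fs * max_tau)               # restrict to physically possible delays
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

def doa_degrees(tau, d=MIC_DIST, c=C):
    """Convert an inter-microphone delay to a broadside arrival angle."""
    return np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0)))

if __name__ == "__main__":
    # Synthetic test: a chirp that reaches mic 2 three samples later, plus mild noise.
    t = np.arange(FS) / FS
    src = np.sin(2 * np.pi * (300 + 200 * t) * t)
    mic1 = src + 0.05 * np.random.randn(FS)
    mic2 = np.roll(src, 3) + 0.05 * np.random.randn(FS)
    tau = gcc_phat(mic2, mic1)                  # mic2 is the delayed channel
    print(f"delay: {tau * 1e6:.1f} us, DOA: {doa_degrees(tau):.1f} deg")
```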

Outline of Annual Research Achievements

This research year, we developed and implemented a real-life sound source tracking system using drones equipped with microphone arrays; previously, such systems had only been demonstrated in simulation. Because of strict drone flight regulations in Japan, we designed an indoor system with miniature drones and custom microphones to work within these restrictions. We also enhanced the system with a drone navigation setup that continuously adjusts the drones' positions to maximize sound tracking accuracy. Further modifications to the sound tracking algorithms were necessary to address real-life constraints and challenges.
Additionally, we have advanced drone noise reduction techniques for improved sound source tracking, with promising results in simulations. Real-life testing, however, revealed challenges that were not anticipated in the simulation phase, necessitating further simulations and experimental tests to refine our approach and meet these new requirements.
These developments contribute to our smart drone audition research theme, which uses multiple drones with autonomous navigation to enhance sound source tracking performance. The system aims not only to improve the accuracy of locating sound sources but also to enhance the quality of the recorded audio by minimizing drone noise interference. These advancements are expected to significantly benefit applications where audio clarity and quality are critical.
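As a simplified stand-in for how bearings observed by several drones can be fused into a single source position (not the project's actual tracking algorithm; drone positions, bearings and noise levels below are made up for illustration), the sketch below intersects the bearing lines from each drone in a least-squares sense. With three or more non-parallel bearings the normal matrix is invertible, and widening the angular spread between drones reduces the estimation error, which is the geometric intuition behind repositioning drones to improve tracking accuracy.

```python
# Minimal sketch (assumed): least-squares fusion of bearing-only observations from drones.
import numpy as np

def triangulate(positions, bearings):
    """Least-squares intersection of bearing lines from several drones.

    positions : (N, 2) drone positions [m]
    bearings  : (N,) bearing of the source seen from each drone, from the +x axis [rad]
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, theta in zip(np.asarray(positions, dtype=float), bearings):
        u = np.array([np.cos(theta), np.sin(theta)])
        P = np.eye(2) - np.outer(u, u)    # projector orthogonal to the bearing direction
        A += P                            # minimises sum_i ||P_i (x - p_i)||^2
        b += P @ p
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source = np.array([4.0, 3.0])                              # ground truth (illustrative)
    drones = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0]])    # drone positions (illustrative)
    diff = source - drones
    bearings = np.arctan2(diff[:, 1], diff[:, 0])              # ideal bearings
    bearings += np.radians(2.0) * rng.standard_normal(len(drones))  # ~2 deg DOA noise
    print("estimated source position:", triangulate(drones, bearings))
```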

Current Status of Research Progress

2: Research has progressed on the whole more than it was originally planned.

Reason

To the best of our understanding, no one had previously developed a multi-drone system for sound source tracking, so we expected that developing such a system would be challenging and take a considerable amount of time. Several components in the system required custom hardware, and much trial and error was needed to overcome the practical challenges and issues that arose, many of which had very limited information or solutions available.
Because of the time spent developing this system, progress on the algorithmic side of the research has been limited, which has impacted the research output in terms of publications.
However, most of the challenges have now been overcome, and we have a working real-life system that performs the designated task. We now have a development platform on which to prototype and test new sound source tracking algorithms, as well as other drone audition-related algorithms. We expect the number of research outputs to increase from this point onwards.

Strategy for Future Research Activity

We intend to continue the smart drone audition research in the following ways:
1) We intend to make use of the newly developed indoor system to prototype new sound source tracking algorithms and, more importantly, drone noise reduction techniques that improve the performance of multi-drone sound source tracking. These include i) drone noise reduction techniques (an illustrative sketch of one generic approach is given after this list), ii) sound source recognition (for multiple sound source scenarios, where identifying and ensuring that the correct sound source is tracked is important) and, if possible, iii) obstacle avoidance and environmental mapping functionalities to improve the drones' self-reliant capabilities.
2) Using the experience and software developed for the indoor system, we intend to expand and develop a full-sized outdoor multi-drone system for practical sound source tracking. This includes not only the development of relevant algorithms but also design choices for the hardware of the full-sized system. We also intend to integrate the multi-drone sound source tracking system with other forms of robots so that cooperative sound source tracking can be carried out.
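As a purely illustrative example of generic rotor noise reduction, and not the rotor-noise-aware techniques being developed in this project, the following sketch applies magnitude spectral subtraction using a rotor-noise-only reference recording; all signals, frame sizes and thresholds are assumptions.

```python
# Minimal sketch (assumed, generic technique): magnitude spectral subtraction of rotor noise.
import numpy as np

def spectral_subtraction(noisy, noise_ref, n_fft=512, hop=256, floor=0.05):
    """Subtract the average rotor-noise magnitude spectrum from each frame."""
    win = np.hanning(n_fft)

    def stft(x):
        frames = [win * x[i:i + n_fft] for i in range(0, len(x) - n_fft + 1, hop)]
        return np.fft.rfft(np.array(frames), axis=1)

    noise_mag = np.abs(stft(noise_ref)).mean(axis=0)             # average rotor-noise spectrum
    X = stft(noisy)
    mag = np.maximum(np.abs(X) - noise_mag, floor * np.abs(X))   # floored subtraction
    Y = mag * np.exp(1j * np.angle(X))                           # keep the noisy phase

    out = np.zeros(len(noisy))                                   # overlap-add resynthesis
    for k, frame in enumerate(np.fft.irfft(Y, n=n_fft, axis=1)):
        out[k * hop:k * hop + n_fft] += frame
    return out

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    rotor = 0.5 * np.sin(2 * np.pi * 180 * t) + 0.1 * np.random.randn(fs)  # fake rotor hum
    target = np.sin(2 * np.pi * 440 * t) * (t > 0.5)                        # fake target tone
    enhanced = spectral_subtraction(target + rotor, rotor)
    print("input power :", np.mean((target + rotor) ** 2))
    print("output power:", np.mean(enhanced ** 2))
```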

Report

(2 results)
  • 2023 Research-status Report
  • 2022 Annual Research Report
  • Research Products

    (13 results)


Int'l Joint Research (1 result), Journal Article (1 result; of which Int'l Joint Research: 1, Peer Reviewed: 1, Open Access: 1), Presentation (10 results; of which Int'l Joint Research: 5, Invited: 2), Remarks (1 result)

  • [Int'l Joint Research] The University of Auckland/Victoria University of Wellington (New Zealand)

    • Related Report
      2023 Research-status Report
  • [Journal Article] Rotor Noise-Aware Noise Covariance Matrix Estimation for Unmanned Aerial Vehicle Audition (2023)

    • Author(s)
      Yen Benjamin、Li Yameizhen、Hioka Yusuke
    • Journal Title

      IEEE/ACM Transactions on Audio, Speech, and Language Processing

      Volume: 31 Pages: 2491-2506

    • DOI

      10.1109/taslp.2023.3288410

    • Related Report
      2023 Research-status Report
    • Peer Reviewed / Open Access / Int'l Joint Research
  • [Presentation] Real Time Sound Source Localization Using Von-Mises ResNet (2024)

    • Author(s)
      Mert Bozkurtlar, Benjamin Yen, Katsutoshi Itoyama, Kazuhiro Nakadai
    • Organizer
      IEEE/SICE International Symposium on System Integration (SII)
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] Robot Audition 5.0 and Beyond, Southern University of Science and Technology (2023)

    • Author(s)
      Kazuhiro Nakadai
    • Organizer
      Southern University of Science and Technology (SUSTech)
    • Related Report
      2023 Research-status Report
    • Invited
  • [Presentation] Performance evaluation of sound source localisation and tracking methods using multiple drones (2023)

    • Author(s)
      Benjamin Yen, Taiki Yamada, Katsutoshi Itoyama, Kazuhiro Nakadai
    • Organizer
      Internoise 2023
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] Development of a continuous classroom signal-to-noise ratio measurement system (2023)

    • Author(s)
      Benjamin Yen, C. T. Justine Hui, Esther Bergin, Eleesa Jensen, Suzanne C. Purdy, William Keith, Yusuke Hioka, James Whitlock, George Dodd
    • Organizer
      Internoise 2023
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] Rotor Noise-Informed Sound Source Tracking with Multiple Drones Using Microphone Arrays (2023)

    • Author(s)
      Benjamin Yen, Taiki Yamada, Katsutoshi Itoyama, Kazuhiro Nakadai
    • Organizer
      IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) LBR
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] PyHARK: A Python Package for Robot Audition Based on HARK (2023)

    • Author(s)
      Kazuhiro Nakadai, Masayuki Takigahira, Katsutoshi Itoyama
    • Organizer
      IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) LBR
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] Robot Audition 5.0 and Beyond (2023)

    • Author(s)
      Kazuhiro Nakadai
    • Organizer
      POSTECH
    • Related Report
      2023 Research-status Report
    • Invited
  • [Presentation] Study of a method for estimating ground surface material from drone rotor noise (2023)

    • Author(s)
      Tsubasa Yano, Katsutoshi Itoyama, Kenji Nishida, Kazuhiro Nakadai
    • Organizer
      SICE SI 2023
    • Related Report
      2023 Research-status Report
  • [Presentation] Few-shot detection on Drone Captured Scenarios (2023)

    • Author(s)
      Md Ragib Amin Nihal, Benjamin Yen, Katsutoshi Itoyama, Kazuhiro Nakadai
    • Organizer
      Annual Conference of the Robotics Society of Japan (RSJ)
    • Related Report
      2023 Research-status Report
  • [Presentation] Rotor noise power spectral density informed sound source enhancement and localisation for unmanned aerial vehicles (2022)

    • Author(s)
      Benjamin Yen and Yusuke Hioka
    • Organizer
      The 61st Special Interest Group on AI Challenges, Japanese Society for Artificial Intelligence (JSAI SIG-Challenge)
    • Related Report
      2022 Annual Research Report
  • [Remarks] Tokyo Institute of Technology, Nakadai Laboratory: Disaster Rescue

    • URL

      https://www.ra.sc.e.titech.ac.jp/research/rescue/

    • Related Report
      2023 Research-status Report


Published: 2022-11-17   Modified: 2024-12-25  
