
2023 Fiscal Year Final Research Report

End-to-End Model for Task-Independent Speech Understanding and Dialogue

Research Project

Project/Area Number 20H00602
Research Category

Grant-in-Aid for Scientific Research (A)

Allocation Type Single-year Grants
Section General
Review Section Medium-sized Section 61: Human informatics and related fields
Research Institution Kyoto University

Principal Investigator

Kawahara Tatsuya  Kyoto University, Graduate School of Informatics, Professor (00234104)

Co-Investigator (Kenkyū-buntansha) Inoue Koji  Kyoto University, Graduate School of Informatics, Assistant Professor (10838684)
Yoshii Kazuyoshi  Kyoto University, Graduate School of Informatics, Associate Professor (20510001)
Project Period (FY) 2020-04-01 – 2024-03-31
Keywords Speech understanding / Spoken dialogue / Speech recognition / End-to-end model
Outline of Final Research Achievements

For general-purpose speech understanding and dialogue based on end-to-end models, we conducted a series of studies from the perspectives of advanced speech recognition and dialogue generation. First, we designed and implemented an end-to-end system that directly recognizes dialogue acts and emotions from speech. Next, we proposed an effective training method for speech recognition of low-resource languages by integrating speaker, language, and domain recognition. We also built a model that generates punctuated and cleaned text directly from speech. Furthermore, we studied how to integrate emotion recognition with speech and gender recognition for effective training. With regard to dialogue generation, end-to-end models represented by large language models have become mainstream, and we proposed a mechanism that reasons about the user's intention and emotion, and the system's intention and emotion, before generating a response.
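The multitask idea described above can be illustrated with a minimal sketch (not the project's actual implementation): a shared speech encoder feeds an ASR output head together with auxiliary utterance-level classifiers for dialogue act and emotion. The Transformer encoder, layer sizes, label counts, and pooling strategy are all illustrative assumptions.

```python
# Minimal multitask end-to-end sketch: shared encoder + ASR (CTC) head
# + dialogue-act and emotion heads. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class MultitaskSpeechModel(nn.Module):
    def __init__(self, n_mels=80, d_model=256, vocab_size=5000,
                 n_dialogue_acts=10, n_emotions=4):
        super().__init__()
        self.frontend = nn.Linear(n_mels, d_model)          # project filterbank frames
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.ctc_head = nn.Linear(d_model, vocab_size)      # frame-level ASR logits (CTC)
        self.da_head = nn.Linear(d_model, n_dialogue_acts)  # utterance-level dialogue act
        self.emo_head = nn.Linear(d_model, n_emotions)      # utterance-level emotion

    def forward(self, feats):                # feats: (batch, time, n_mels)
        h = self.encoder(self.frontend(feats))
        pooled = h.mean(dim=1)               # simple average pooling over time
        return self.ctc_head(h), self.da_head(pooled), self.emo_head(pooled)

# Training would combine a CTC loss on the ASR logits with cross-entropy
# losses on the dialogue-act and emotion logits, weighted per task.
```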

Free Research Field

Intelligent informatics

Academic Significance and Societal Importance of the Research Achievements

Speech recognition has achieved large performance gains by training end-to-end models on large-scale data, but performance on low-resource languages and on emotion recognition is still insufficient. In response, we showed that integrating various speech attributes yields substantial improvements.
In dialogue generation as well, large language models have become dominant, but when such models are deployed in robots and similar systems, building and training models of internal states such as intention and emotion is expected to lead to empathetic and symbiotic systems.
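A rough sketch of reasoning over internal states before response generation is given below; it is not the project's actual system. The two-step prompt structure and the generate() helper are illustrative assumptions standing in for any large language model interface.

```python
# Minimal sketch: infer user/system intention and emotion first, then
# condition the response on those internal states. Prompts are illustrative.
def respond(generate, dialogue_history: str) -> str:
    """`generate` is any text-generation function, e.g. a large language model."""
    # Step 1: reason about internal states (user and system intention/emotion).
    reasoning = generate(
        "Dialogue so far:\n" + dialogue_history + "\n"
        "Infer the user's intention and emotion, then state the system's "
        "intention and emotion for the next turn."
    )
    # Step 2: generate the response conditioned on the inferred internal states.
    return generate(
        "Dialogue so far:\n" + dialogue_history + "\n"
        "Internal states:\n" + reasoning + "\n"
        "Write an empathetic system response consistent with these states."
    )
```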

Published: 2025-01-30  
