
Research and Development of an Automated, Interactive, and User-Configurable Conversational Agent for Always-Available Personalized Language Tutoring

Research Project

Project/Area Number 21K17779
Research Category

Grant-in-Aid for Early-Career Scientists

Allocation Type: Multi-year Fund
Review Section: Basic Section 61020: Human interface and interaction-related
Research Institution: The University of Tokyo

Principal Investigator

Zhang Xinlei  The University of Tokyo, Interfaculty Initiative in Information Studies / Graduate School of Interdisciplinary Information Studies, Project Researcher (60898138)

Project Period (FY) 2021-04-01 – 2022-03-31
Project Status Discontinued (Fiscal Year 2021)
Budget Amount
¥4,680,000 (Direct Cost: ¥3,600,000, Indirect Cost: ¥1,080,000)
Fiscal Year 2023: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2022: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2021: ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000)
Keywords: Tutoring Agent / Speech Recognition / Multi-Modal Interface / Language Learning / Device Wakeup
Outline of Research at the Start

This research aims to develop a one-of-a-kind language tutoring agent that is fully automated, adaptive, and user-configurable. To achieve this goal, I plan to 1) develop an architecture that allows users to generate and customize the agent through simple text editing; 2) develop a technique to chunk the template speech for difficulty adjustment; and 3) evaluate the agent's usability and learning outcomes in self-study. Such an agent can serve as a complementary assistant that helps language tutors train students, or as a personalized language tutor that teaches each student individually during self-study.

Outline of Annual Research Achievements

During the six months this project was active, I achieved two main goals: 1) I developed a prototype system that allows users to generate and customize the agent through simple text editing. The system takes a text file containing the transcript of the template speech and the user's feedback mode, then creates the tutoring agent accordingly for adaptive and personalized tutoring. This work was published as part of a paper at EICS 2021.
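The text-editing workflow described above can be sketched as follows. Note that this is a hypothetical illustration: the actual file format, field names (`feedback:`), and `AgentConfig` structure are assumptions for demonstration, not the format used by the published system.

```python
# Hypothetical sketch of configuring a tutoring agent from a plain-text file.
# The file format and the "feedback:" header are assumptions, not the
# paper's actual specification.
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    feedback_mode: str = "after_each_sentence"      # assumed default
    template_sentences: list = field(default_factory=list)

def load_agent_config(text: str) -> AgentConfig:
    """Parse a template: an optional 'feedback:' header line, then one
    template-speech sentence per line."""
    cfg = AgentConfig()
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.lower().startswith("feedback:"):
            cfg.feedback_mode = line.split(":", 1)[1].strip()
        else:
            cfg.template_sentences.append(line)
    return cfg

demo = """feedback: after_each_sentence
Good morning, everyone.
Today I will talk about speech tutoring agents.
"""
cfg = load_agent_config(demo)
print(cfg.feedback_mode)            # after_each_sentence
print(len(cfg.template_sentences))  # 2
```

The point of the design, as described in the outline, is that editing an ordinary text file is the entire customization interface: no programming is needed to change the template speech or the feedback behavior.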

2) I developed a novel way to let users wake a device by changing their prosody when speaking the wake keyword (e.g., "Alexa"), enabling more accurate device activation. Evaluation studies show significant advantages of this method over keyword-spotting-based methods. The results are summarized in a paper submitted to a top conference and currently under review.
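The contrast with keyword spotting can be illustrated with a toy sketch. The report does not describe the actual detection algorithm, so the rising-pitch rule, the `rise_ratio` threshold, and the F0 values below are all assumptions chosen only to show why a prosody signal adds disambiguation that word matching alone lacks.

```python
# Illustrative contrast only: keyword spotting reacts to any occurrence of
# the keyword, while an assumed prosody gate also requires a deliberate
# rising pitch contour over the keyword. The rule and threshold are
# hypothetical, not the method from the paper under review.

def keyword_spotting(transcript: str, keyword: str = "alexa") -> bool:
    # Fires whenever the keyword is heard, intentional or not.
    return keyword in transcript.lower()

def prosody_gate(f0_contour: list, rise_ratio: float = 1.1) -> bool:
    # Assumed rule: activate only if mean F0 over the second half of the
    # keyword exceeds the first half by at least rise_ratio.
    half = len(f0_contour) // 2
    first = sum(f0_contour[:half]) / half
    second = sum(f0_contour[half:]) / (len(f0_contour) - half)
    return second >= rise_ratio * first

# A casual mention of the keyword passes keyword spotting (a false
# activation) but fails the prosody gate; a deliberate rising utterance
# passes both.
flat  = [120, 121, 119, 120, 121, 120]   # F0 in Hz, flat casual mention
risen = [120, 125, 135, 150, 165, 180]   # F0 in Hz, deliberate rising contour
print(keyword_spotting("i asked alexa yesterday"))  # True (false activation)
print(prosody_gate(flat))                           # False
print(prosody_gate(risen))                          # True
```

A real system would estimate the F0 contour from audio with a pitch tracker rather than receive it directly; the sketch only captures the decision logic that prosody makes available.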

Report

(1 result)
  • 2021 Annual Research Report
  • Research Products

    (2 results)

All 2021

All Journal Article (1 result) (of which Int'l Joint Research: 1 result, Peer Reviewed: 1 result) Presentation (1 result) (of which Int'l Joint Research: 1 result)

  • [Journal Article] JustSpeak: Automated, User-Configurable, Interactive Agents for Speech Tutoring (2021)

    • Author(s)
      Xinlei Zhang, Takashi Miyaki, and Jun Rekimoto
    • Journal Title

      Proc. ACM Hum.-Comput. Interact. 5, EICS

      Volume: 5, Issue: EICS, Article: 202, Pages: 24

    • DOI

      10.1145/3459744

    • Related Report
      2021 Annual Research Report
    • Peer Reviewed / Int'l Joint Research
  • [Presentation] JustSpeak: Automated, User-Configurable, Interactive Agents for Speech Tutoring (2021)

    • Author(s)
      Zhang Xinlei
    • Organizer
      The 13th Engineering Interactive Computing Systems (EICS) conference
    • Related Report
      2021 Annual Research Report
    • Int'l Joint Research


Published: 2021-04-28   Modified: 2023-12-25  
