
Development of Cognitive Symbiosis in Virtual Agents to Improve Remote Classroom Learning Outcomes

Research Project

Project/Area Number 21H03482
Research Category

Grant-in-Aid for Scientific Research (B)

Allocation Type Single-year Grants
Section General
Review Section Basic Section 61020: Human interface and interaction-related
Research Institution Osaka University

Principal Investigator

ORLOSKY JASON  Osaka University, Cybermedia Center, Specially Appointed Associate Professor (Full-time) (10815111)

Co-Investigator (Kenkyū-buntansha) 白井 詩沙香 (Shizuka Shirai)  Osaka University, Cybermedia Center, Lecturer (30757430)
清川 清 (Kiyoshi Kiyokawa)  Nara Institute of Science and Technology, Graduate School of Science and Technology, Professor (60358869)
Project Period (FY) 2021-04-01 – 2025-03-31
Project Status Granted (Fiscal Year 2023)
Budget Amount
¥17,160,000 (Direct Cost: ¥13,200,000, Indirect Cost: ¥3,960,000)
Fiscal Year 2023: ¥3,120,000 (Direct Cost: ¥2,400,000, Indirect Cost: ¥720,000)
Fiscal Year 2022: ¥5,720,000 (Direct Cost: ¥4,400,000, Indirect Cost: ¥1,320,000)
Fiscal Year 2021: ¥6,890,000 (Direct Cost: ¥5,300,000, Indirect Cost: ¥1,590,000)
Keywords Eye Tracking / State Detection / Remote Environments / Learning / virtual reality / agents / cognition / eye tracking / learning / simulation / remote interaction / education
Outline of Research at the Start

In this research, we propose a unique class of virtual agents that resonate with a user's (learner's) actions and emotions. To do so, we focus on three things: 1) tailoring the virtual agent's appearance, positioning, and interaction to the user, 2) building feedback that is responsive to the user's mental state, and 3) facilitating a long-term, trusting relationship with the agent. Unlike a game or a wearable health tracker, this symbiotic training process allows the user to build deeper trust with the agent, and their learning will benefit from that relationship.

Outline of Annual Research Achievements

In this fiscal year, we began developing several fundamental parts of the project. This includes the design of avatars as well as tests to determine whether the avatars elicit certain responses in users. The avatars are rigged with various emotional states, which can be applied in the remote learning environment. Using lip trackers, we can also track and replicate remote participants' facial expressions.
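
As a simplified illustration of this pipeline, the sketch below mirrors lip-tracker blendshape weights onto an avatar and thresholds them into a coarse emotional state. The blendshape names, thresholds, and avatar methods are hypothetical placeholders, not the project's actual implementation.

```python
# Hypothetical sketch: drive a remote avatar from lip-tracker blendshape
# weights (0.0-1.0). All identifiers and thresholds are illustrative
# assumptions rather than the project's real code.

def classify_expression(blendshapes: dict) -> str:
    """Collapse raw blendshape weights into a coarse expression label."""
    if blendshapes.get("mouth_smile", 0.0) > 0.5:
        return "happy"
    if blendshapes.get("mouth_frown", 0.0) > 0.5:
        return "concerned"
    if blendshapes.get("jaw_open", 0.0) > 0.7:
        return "surprised"
    return "neutral"

def apply_to_avatar(avatar, blendshapes: dict) -> None:
    """Mirror the tracked face onto the avatar, then set its emotional state."""
    for name, weight in blendshapes.items():
        avatar.set_blendshape(name, weight)   # assumed avatar API
    avatar.set_emotional_state(classify_expression(blendshapes))

# Example frame from a lip tracker (values are made up):
# apply_to_avatar(avatar, {"mouth_smile": 0.8, "jaw_open": 0.2})
```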

We have also designed a remote interaction environment in which mutual gaze between an instructor and a student can be visualized. The instructor sees a miniature version of the remote environment together with the student within it. The system already supports joint grasping of, and shared gaze onto, objects in the scene, which will eventually serve as the learning or training materials with which the remote participants interact.
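
To make the mutual-gaze idea concrete, the following sketch checks whether the instructor's and the student's gaze rays point at the same scene object by comparing each gaze direction with the direction to each object. The data layout, angular threshold, and object names are assumptions for the example, not details of the project's system.

```python
# Illustrative mutual-gaze check: two gaze rays (origin, direction) and a set
# of object positions. The 5-degree threshold and the example scene are
# assumptions made for this sketch.
import numpy as np

ANGLE_THRESHOLD_DEG = 5.0  # how tightly a gaze ray must point at an object

def gazed_object(origin, direction, objects):
    """Return the name of the object the gaze ray points at, or None."""
    direction = direction / np.linalg.norm(direction)
    best_name, best_angle = None, ANGLE_THRESHOLD_DEG
    for name, position in objects.items():
        to_obj = position - origin
        to_obj = to_obj / np.linalg.norm(to_obj)
        angle = np.degrees(np.arccos(np.clip(direction @ to_obj, -1.0, 1.0)))
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name

def mutual_gaze(instructor, student, objects):
    """True if both gaze rays land on the same object."""
    target_a = gazed_object(*instructor, objects)
    target_b = gazed_object(*student, objects)
    return target_a is not None and target_a == target_b

objects = {"beaker": np.array([0.0, 1.0, 2.0]), "burner": np.array([1.0, 1.0, 2.0])}
instructor = (np.array([0.0, 1.6, 0.0]), np.array([0.0, -0.3, 1.0]))
student = (np.array([2.0, 1.5, 0.0]), np.array([-1.0, -0.25, 1.0]))
print(mutual_gaze(instructor, student, objects))  # both rays land on the beaker
```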

Lastly, we have continued our work on developing better models for classifying readers' understanding of texts in the context of education and personal study. This included exploring support vector machine and random forest classifiers to detect difficulties during long periods of learning and studying. The classification currently focuses on text-based learning, which we hypothesize can eventually be applied to 3D learning.
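
The following is a hedged sketch of the kind of classifier comparison described above: a support vector machine and a random forest, evaluated with cross-validation, predicting whether a reader had difficulty in a given reading window. The feature names and data are synthetic placeholders; the project's actual features and labels are not specified here.

```python
# Sketch of comparing an SVM and a random forest on per-window eye-tracking
# features. Features and labels are synthetic stand-ins, not project data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Each row: [mean fixation duration, regression count, blink rate,
#            pupil diameter change] for one reading window (synthetic).
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 1 = reader had difficulty in this window

models = {
    "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy {scores.mean():.2f}")
```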

Current Status of Research Progress

2: Research has progressed on the whole more than it was originally planned.

Reason

In general, the project is proceeding on track. Two publications are currently being written, and others are planned for later in 2022. Work on the remote environment setup also started slightly early, though the project as a whole remains roughly on schedule.

Strategy for Future Research Activity

In the next year, we plan to finish developing the remote learning environment, including the integration of avatars into the simulated learning space. Later in the year, we will begin to develop and test the intelligent refinement of agent activities based on the needs of participants in the learning environment.

This also involves applying the aforementioned machine learning models to a 3D space. We will begin integrating the methods developed for text-based learning into 3D interactive learning scenarios. We will then gather feedback from learners and refine the learning tools and simulation as necessary.

Report

(1 result)
  • 2021 Annual Research Report

Research Products

(2 results)


  • [Int'l Joint Research] Augusta University (U.S.A.)

    • Country Name
      U.S.A.
    • Counterpart Institution
      Augusta University
    • Related Report
      2021 Annual Research Report
  • [Int'l Joint Research] University of California Santa Barbara (U.S.A.)

    • Country Name
      U.S.A.
    • Counterpart Institution
      University of California Santa Barbara
    • Related Report
      2021 Annual Research Report


Published: 2021-04-28   Modified: 2023-07-19  
