Development of Cognitive Symbiosis in Virtual Agents to Improve Remote Classroom Learning Outcomes
Project/Area Number | 23K21688 |
Project/Area Number (Other) | 21H03482 (2021-2023) |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Multi-year Fund (2024) Single-year Grants (2021-2023) |
Section | General |
Review Section | Basic Section 61020: Human interface and interaction-related |
Research Institution | Osaka University |
Principal Investigator | ORLOSKY JASON, Osaka University, Cybermedia Center, Specially Appointed Associate Professor (full-time) (10815111) |
Co-Investigators (Kenkyū-buntansha) |
Shizuka Shirai, Osaka University, Cybermedia Center, Lecturer (30757430)
Kiyoshi Kiyokawa, Nara Institute of Science and Technology, Graduate School of Science and Technology, Professor (60358869)
|
Project Period (FY) | 2021-04-01 – 2025-03-31 |
Project Status | Granted (Fiscal Year 2024) |
Budget Amount |
¥17,160,000 (Direct Cost: ¥13,200,000, Indirect Cost: ¥3,960,000)
Fiscal Year 2024: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2023: ¥3,120,000 (Direct Cost: ¥2,400,000, Indirect Cost: ¥720,000)
Fiscal Year 2022: ¥5,720,000 (Direct Cost: ¥4,400,000, Indirect Cost: ¥1,320,000)
Fiscal Year 2021: ¥6,890,000 (Direct Cost: ¥5,300,000, Indirect Cost: ¥1,590,000)
|
Keywords | Eye Tracking / Artificial Intelligence / Learning / Agents / State Detection / Remote Environments / Virtual Reality / Cognition / Simulation / Remote Interaction / Education |
Outline of Research at the Start |
This research involves integrating voice-to-text with AR for interactive learning, customizing LLMs for conversational education styles, and refining these technologies through user feedback. Evaluations will be conducted for enhanced language and content learning.
|
Outline of Annual Research Achievements |
In this fiscal year, we began developing several fundamental components of the project, including the design of avatars and tests to determine whether the avatars elicit certain responses in users. The avatars are rigged with various emotional states that can be applied in the remote learning environment. Using lip trackers, we can also capture and replicate remote participants' facial expressions.
We have also designed a remote interaction environment in which mutual gaze between an instructor and a student can be visualized remotely. The instructor sees a miniature version of both the remote environment and the user within it. The system already supports grasping of, and mutual gaze onto, different objects in the scene, which will eventually serve as the learning or training materials with which the remote participants interact.
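The mutual-gaze mechanism described above can be sketched as a gaze-ray test: each participant's eye tracker yields a gaze ray, each ray is intersected with the bounding spheres of scene objects, and mutual gaze is flagged when both rays select the same object. All names, coordinates, and the sphere radius below are hypothetical illustrations, not the project's actual implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Ray:
    origin: tuple     # (x, y, z) eye position in the shared scene
    direction: tuple  # normalized gaze direction

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gaze_target(ray, objects, radius=0.15):
    """Return the id of the nearest object whose bounding sphere the ray hits."""
    best_id, best_t = None, float("inf")
    for obj_id, center in objects.items():
        oc = tuple(c - o for c, o in zip(center, ray.origin))
        t = _dot(oc, ray.direction)  # distance along the ray to closest approach
        if t < 0:
            continue                 # object is behind the viewer
        closest = tuple(o + t * d for o, d in zip(ray.origin, ray.direction))
        if math.dist(closest, center) <= radius and t < best_t:
            best_id, best_t = obj_id, t
    return best_id

def mutual_gaze(instructor_ray, student_ray, objects):
    """True when both participants' gaze rays select the same scene object."""
    a = gaze_target(instructor_ray, objects)
    b = gaze_target(student_ray, objects)
    return a is not None and a == b

# Hypothetical scene: two learning objects on a virtual desk.
objects = {"beaker": (0.0, 1.0, 2.0), "notebook": (1.0, 1.0, 2.0)}
instructor = Ray((0.0, 1.6, 0.0), (0.0, -0.287, 0.958))
student = Ray((0.5, 1.5, 0.0), (-0.236, -0.236, 0.943))
```

In a real system the same test would run per frame, and the instructor's miniature view would highlight the jointly attended object.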
Lastly, we have continued developing better models for classifying learners' understanding of texts in the context of education and personal study. This included exploring support vector machines and random forest algorithms to classify difficulties during long periods of learning and studying. The classification currently targets text-based learning, which we hypothesize can eventually be applied to 3D learning.
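A minimal sketch of such a classifier, assuming per-window eye-tracking statistics (mean fixation duration, blink rate, pupil diameter) labeled easy vs. difficult; the features, effect sizes, and data here are synthetic stand-ins, not the project's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic feature windows: difficult passages are assumed (hypothetically)
# to show longer fixations, fewer blinks, and larger pupil diameters.
easy = rng.normal([220.0, 18.0, 3.2], [30.0, 4.0, 0.3], size=(100, 3))
hard = rng.normal([310.0, 12.0, 3.8], [40.0, 4.0, 0.3], size=(100, 3))
X = np.vstack([easy, hard])
y = np.array([0] * 100 + [1] * 100)  # 0 = easy, 1 = difficult

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: held-out accuracy = {model.score(X_te, y_te):.2f}")
```

Scaling matters for the SVM because the features live on very different ranges (milliseconds vs. millimeters); the random forest is insensitive to it.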
|
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
Overall, the project is proceeding on track. Two publications are currently in preparation, with others planned for later in 2022. Setup of the remote environment also started slightly ahead of schedule, while the project as a whole remains roughly on schedule.
|
Strategy for Future Research Activity |
In the next year, we plan to finish development of the remote learning environment, including the integration of avatars into the simulated learning space. Later in the year, we will begin developing and testing the intelligent refinement of agent activities based on the needs of participants in the learning environment.
This also involves applying the aforementioned machine learning models to 3D space: we will begin integrating the methods developed for text-based learning into 3D interactive learning scenarios. We will then gather feedback from learners and refine the learning tools and simulation as necessary.
|
Report
(1 result)
Research Products
(2 results)