Project/Area Number | 23K21688
Project/Area Number (Other) | 21H03482 (2021-2023)
Research Category | Grant-in-Aid for Scientific Research (B)
Allocation Type | Multi-year Fund (2024), Single-year Grants (2021-2023)
Section | General
Review Section | Basic Section 61020: Human interface and interaction-related
Research Institution | Osaka University
Principal Investigator | ORLOSKY Jason, Osaka University, Cybermedia Center, Specially Appointed Associate Professor (full-time) (10815111)
Co-Investigator (Kenkyū-buntansha) |
SHIRAI Shizuka, Osaka University, Cybermedia Center, Lecturer (30757430)
KIYOKAWA Kiyoshi, Nara Institute of Science and Technology, Graduate School of Science and Technology, Professor (60358869)
Project Period (FY) | 2021-04-01 – 2025-03-31
Project Status | Granted (Fiscal Year 2024)
Budget Amount |
Total: ¥17,160,000 (Direct Cost: ¥13,200,000, Indirect Cost: ¥3,960,000)
Fiscal Year 2024: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2023: ¥3,120,000 (Direct Cost: ¥2,400,000, Indirect Cost: ¥720,000)
Fiscal Year 2022: ¥5,720,000 (Direct Cost: ¥4,400,000, Indirect Cost: ¥1,320,000)
Fiscal Year 2021: ¥6,890,000 (Direct Cost: ¥5,300,000, Indirect Cost: ¥1,590,000)
Keywords | Eye Tracking / Artificial Intelligence / Learning / Agents / Augmented Reality / Virtual Reality / State Detection / Remote Environments / Cognition / Simulation / Remote Interaction / Education
Outline of Research at the Start |
This research integrates voice-to-text with AR for interactive learning, customizes LLMs for conversational educational styles, and refines these technologies through user feedback. Evaluations will assess improvements in both language and content learning.
Outline of Annual Research Achievements |
In this period, we explored applications of virtual reality (VR) and augmented reality (AR) in education and healthcare. First, we conducted a study on educational comics that used eye tracking in VR to identify key gaze features for estimating the difficulty levels perceived by readers, suggesting a way to dynamically adjust educational content. We also developed AMSwipe, a gaze-based text-entry method for virtual environments that enables efficient, hands-free typing without physical controllers. Additionally, EyeShadows, a tool we developed and tested in both AR and VR, improves the selection and manipulation of virtual elements using peripheral copies of items, enabling faster, more accurate interactions. Furthermore, we leveraged VR to enhance medical training, particularly by simulating the experiences of Parkinson's disease patients to foster empathy and insight among healthcare students. These technologies demonstrate significant potential for enhancing remote education, providing immersive, interactive learning experiences that can be tailored to individual needs and capabilities. We also explored interactive retraining of neural networks for applications in language learning.
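As a concrete illustration of the difficulty-estimation step, the sketch below shows one way gaze features of the kind described above could be extracted from a recorded gaze trace and fed to a classifier. This is a minimal sketch, not the study's actual pipeline: the feature set (fixation duration, fixation count, saccade amplitude), the threshold-based fixation detector, and the input placeholders recorded_traces and reader_labels are all assumptions made for illustration.

```python
# Hypothetical sketch: estimating perceived difficulty from gaze features.
# The study's actual features and model are not specified here; fixation
# duration and saccade amplitude are assumed purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

def gaze_features(xs, ys, ts, fixation_thresh_deg=1.0):
    """Summarize a gaze trace as [mean fixation duration, fixation count,
    mean saccade amplitude].

    xs, ys: gaze angles in degrees; ts: timestamps in seconds.
    A jump between consecutive samples larger than fixation_thresh_deg
    is treated as a saccade boundary (a simplified detector).
    """
    fixations, saccades, start = [], [], 0
    for i in range(1, len(xs)):
        amp = np.hypot(xs[i] - xs[i - 1], ys[i] - ys[i - 1])
        if amp > fixation_thresh_deg:        # gaze jumped: saccade boundary
            fixations.append(ts[i - 1] - ts[start])
            saccades.append(amp)
            start = i
    fixations.append(ts[-1] - ts[start])     # close the final fixation
    return [np.mean(fixations), len(fixations),
            np.mean(saccades) if saccades else 0.0]

# Assumed inputs: one gaze trace per comic page, each labeled by the
# reader as easy (0) or hard (1).
X = np.array([gaze_features(*trace) for trace in recorded_traces])
clf = LogisticRegression().fit(X, reader_labels)
```

A model of this kind could then score new pages online, which is what would let the system adjust content difficulty dynamically as described above.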
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
This research is generally progressing on schedule. We have published in several areas, and we have set up a remote environment in which to conduct the final phase of the research over the next year.
Strategy for Future Research Activity |
The last phase of the research will proceed as planned, though we have made some updates to account for advances in AI technology. In addition, we are working on an in-situ object labeling approach, which can assist with a more specific learning task: language learning. Our system will be extended to incorporate large language models (LLMs), which will be customized to power virtual educational agents. This should result in more interactive, context-based learning and longer retention of concepts.
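As a rough illustration of the planned agent, the sketch below wires an in-situ object label into an LLM prompt for a short language-learning exchange. This is a hypothetical sketch, not the project's implementation: the backend (OpenAI's chat API), the model name, the TUTOR_PROMPT wording, and the tutor_turn helper are all illustrative assumptions.

```python
# Hypothetical sketch of an LLM-backed educational agent: an in-situ
# object label (e.g., from the AR system's object recognizer) is turned
# into a tutoring exchange for language learning. The project's actual
# model, prompt design, and backend are not specified here; the OpenAI
# chat API is assumed purely for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TUTOR_PROMPT = (
    "You are a virtual language tutor embedded in an AR headset. "
    "Given an object the learner is looking at, teach its name in "
    "{target_lang}, give one example sentence, and ask a short "
    "follow-up question to reinforce retention."
)

def tutor_turn(object_label: str, target_lang: str = "Japanese") -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": TUTOR_PROMPT.format(target_lang=target_lang)},
            {"role": "user",
             "content": f"The learner is looking at: {object_label}"},
        ],
    )
    return response.choices[0].message.content

# e.g., tutor_turn("coffee cup") -> a short lesson built around the
# object currently in the learner's view
```

Grounding each exchange in the object the learner is actually looking at is what makes the interaction context-based, which is the property the strategy above ties to longer retention.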