Emergent Reality: Knowledge Formation from Multimodal Learning through Human-Robot Interaction in Extended Reality
Project/Area Number | 22K17981
Research Category | Grant-in-Aid for Early-Career Scientists
Allocation Type | Multi-year Fund
Review Section | Basic Section 61050: Intelligent robotics-related
Research Institution | Ritsumeikan University
Principal Investigator | ElHafi Lotfi, Ritsumeikan University, Research Organization of Science and Technology, Assistant Professor (90821554)
Project Period (FY) | 2022-04-01 – 2025-03-31
Project Status | Granted (Fiscal Year 2022)
Budget Amount | ¥4,290,000 (Direct Cost: ¥3,300,000, Indirect Cost: ¥990,000)
Fiscal Year 2024: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2023: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2022: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Keywords | Extended Reality / Human-Robot Interaction / Multimodal Learning / Emergent Reality / Knowledge Formation
Outline of Research at the Start |
This proposal introduces the concept of Emergent Reality, a novel framework that combines multimodal unsupervised learning and human-robot interactions in extended reality to visualize the emergent phenomena derived from the robot's observations and intervene in its learning process.
Outline of Annual Research Achievements |
A human-robot interface using mixed reality (MR) was developed to reduce the user's burden during the interactive, multimodal, and on-site teaching of new knowledge to service robots. The effectiveness of the interface was evaluated using the System Usability Scale (SUS) and NASA Task Load Index (NASA-TLX) in three experimental scenarios: 1) no sharing of inference results with the user, 2) sharing inference results through voice dialogue (baseline), and 3) sharing inference results using the MR interface (proposed). The MR interface significantly reduced the temporal, physical, and mental burdens compared to voice dialogue with the robot. The results were presented at IEEE/RSJ IROS 2022 and featured in the Nikkan Kogyo Shimbun newspaper.
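For reference, the sketch below shows the conventional scoring of the two questionnaires used in this evaluation: SUS (ten 5-point items rescaled to 0-100) and raw NASA-TLX (unweighted mean of six 0-100 subscales). The code and the example responses are illustrative only and are not the analysis used in the study.

```python
# Minimal sketch (not the study's analysis code): conventional scoring of the
# SUS and raw NASA-TLX questionnaires, with made-up example responses.

def sus_score(responses):
    """responses: 10 Likert ratings (1-5) for the standard SUS items."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])   # items 1,3,5,7,9: positively worded
    even = sum(5 - r for r in responses[1::2])  # items 2,4,6,8,10: negatively worded
    return (odd + even) * 2.5                   # rescaled to 0-100

def raw_tlx(ratings):
    """ratings: six NASA-TLX subscales (mental, physical, temporal demand,
    performance, effort, frustration), each 0-100; unweighted mean."""
    assert len(ratings) == 6
    return sum(ratings) / 6.0

# Hypothetical single-participant responses, for illustration only.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
print(raw_tlx([30, 20, 25, 15, 35, 10]))          # 22.5
```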
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
The research has been progressing smoothly. In the first year, an MR interface was developed and tested for teaching new objects to service robots under human supervision, demonstrating a significant reduction in user burden compared to interaction through natural speech or gestures alone. The second-year plan focuses on displaying the robot's multimodal observations in extended reality (XR) using human-readable representations during service interactions. To achieve this, a collaborative human-robot task will be conducted that involves bidirectional sharing in XR of first-person observations of the service environment. This is expected to further enhance the effectiveness of human-robot interaction.
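As an illustration of what such bidirectional sharing could carry, the sketch below defines a hypothetical container for a single first-person multimodal observation; the field names, modalities, and types are assumptions made for this example, not the project's actual interface.

```python
# Hypothetical container for one first-person multimodal observation shared in XR.
# The schema is an illustrative assumption, not the project's actual message format.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MultimodalObservation:
    timestamp: float                          # seconds since the session started
    agent: str                                # "human" or "robot"
    pose: Tuple[float, float, float]          # observer position in the shared frame
    rgb_image_path: Optional[str] = None      # visual modality
    depth_image_path: Optional[str] = None    # geometric modality
    speech_transcript: Optional[str] = None   # linguistic modality
    detected_objects: List[str] = field(default_factory=list)  # symbolic labels

# Example: a robot observation that could be rendered as a human-readable
# annotation anchored at `pose` in the XR scene.
obs = MultimodalObservation(
    timestamp=12.4, agent="robot", pose=(1.2, 0.0, 0.8),
    speech_transcript="this is a mug", detected_objects=["mug"],
)
print(obs)
```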
Strategy for Future Research Activity |
In the upcoming research stages, the focus will be on exploring methods for projecting high-dimensional latent structures into extended reality (XR) to provide insights into the learning process of both humans and robots. This will involve developing novel techniques for transforming complex data into intuitive, human-readable representations in XR. The aim is to enable direct human intervention in a robot's probabilistic learning process in XR by visualizing the latent space on top of real and virtual spaces, allowing users to better understand and interact with the robot's decision-making processes.
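As a minimal sketch of the kind of projection described above, the code below reduces hypothetical high-dimensional latent vectors to three coordinates that an XR client could render as markers anchored in the shared space. PCA via SVD and the random data are assumptions chosen for illustration; the project may rely on different embedding techniques.

```python
# Minimal sketch: project high-dimensional latent vectors to 3-D coordinates
# that an XR client could render as markers. PCA via SVD is used purely for
# illustration and is not claimed to be the project's method.
import numpy as np

def project_to_3d(latents: np.ndarray) -> np.ndarray:
    """latents: (n_samples, n_dims) array; returns (n_samples, 3) coordinates."""
    centered = latents - latents.mean(axis=0, keepdims=True)
    # Right singular vectors give the principal directions of the latent cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:3].T

# Hypothetical latent samples (e.g., posteriors from a multimodal model),
# reduced to positions that could be overlaid on real and virtual spaces.
rng = np.random.default_rng(0)
latents = rng.normal(size=(100, 64))
coords = project_to_3d(latents)
print(coords.shape)  # (100, 3)
```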
Report (1 result)
Research Products (1 result)