Project/Area Number | 22K17981
Research Category | Grant-in-Aid for Early-Career Scientists
Allocation Type | Multi-year Fund
Review Section | Basic Section 61050: Intelligent robotics-related
Research Institution | Ritsumeikan University
Principal Investigator | ElHafi Lotfi, Ritsumeikan University, Research Organization of Science and Technology, Associate Professor (90821554)
Project Period (FY) | 2022-04-01 – 2025-03-31
Project Status | Granted (Fiscal Year 2023)
Budget Amount | ¥4,290,000 (Direct Cost: ¥3,300,000, Indirect Cost: ¥990,000)
Fiscal Year 2024: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2023: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2022: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Keywords | Extended Reality / Human-Robot Interaction / Multimodal Learning / Emergent Reality / Knowledge Formation
Outline of Research at the Start |
This proposal introduces the concept of Emergent Reality, a novel framework that combines multimodal unsupervised learning with human-robot interaction in extended reality to visualize the emergent phenomena derived from a robot's observations and to let users intervene in its learning process.
Outline of Annual Research Achievements |
Significant progress has been made in human-robot interactive learning within extended reality, with two main achievements: 1) a mixed reality-based 6D-pose annotation system for robot manipulation in service environments, which improves annotation accuracy and reduces positional errors, and 2) an interactive learning system for 3D semantic segmentation with autonomous mobile robots, which improves segmentation accuracy in new environments and predicts new object classes from minimal additional annotations. Both achievements focus on creating human-readable representations that give users a deeper understanding of service robots' learning processes.
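To make the first achievement's evaluation criterion concrete, the following is a minimal sketch, not the project's actual code, of how the positional and rotational error of an annotated 6D pose can be measured against ground truth; it assumes poses are represented as 4x4 homogeneous transforms, and the function name `pose_error` and the example values are purely hypothetical.

```python
# Minimal sketch (not the project's code): measuring how far an annotated
# 6D pose is from ground truth. Assumes poses are 4x4 homogeneous transforms;
# pose_error and the example values are purely illustrative.
import numpy as np

def pose_error(T_annot: np.ndarray, T_gt: np.ndarray) -> tuple[float, float]:
    """Return (translation error in meters, rotation error in radians)."""
    # Translation error: Euclidean distance between the two pose origins.
    t_err = float(np.linalg.norm(T_annot[:3, 3] - T_gt[:3, 3]))
    # Rotation error: angle of the relative rotation R_annot^T @ R_gt,
    # recovered from its trace via theta = arccos((tr(R) - 1) / 2).
    R_rel = T_annot[:3, :3].T @ T_gt[:3, :3]
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return t_err, float(np.arccos(cos_theta))

# Example: an annotation offset by 2 cm and rotated 5 degrees about z.
theta = np.deg2rad(5.0)
T_gt = np.eye(4)
T_annot = np.eye(4)
T_annot[:3, :3] = [[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]]
T_annot[:3, 3] = [0.02, 0.0, 0.0]
print(pose_error(T_annot, T_gt))  # ~(0.02 m, 0.087 rad)
```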
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
The research is advancing smoothly, building on the first year's development of a mixed reality-based interface that significantly reduced user burden. The second year focused on multimodal observations in extended reality (XR) for creating human-readable representations that deepen users' understanding of service robots' learning processes. Experiments with collaborative human-robot tasks in XR demonstrated more effective interaction, enabling more intuitive and direct user involvement in the robots' learning process.
Strategy for Future Research Activity |
The final year will focus on the challenge of transforming complex latent spaces into intuitive representations within extended reality. The goal is to develop novel techniques that allow users to visualize and interact with the latent space, thereby facilitating direct human intervention in the robot's learning process. The outcome is expected to enhance users' understanding of, and control over, knowledge formation in service robots.
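As one way to picture this step, here is a minimal sketch under assumptions not stated in the record: a model's latent vectors are projected to 3D points that an XR client could render as selectable markers. PCA stands in for whatever dimensionality-reduction technique the project actually uses, and all names, the 128-dimensional latent size, and the random data are hypothetical.

```python
# Minimal sketch (PCA is a stand-in for the project's actual technique):
# projecting latent vectors to 3D points an XR client could render.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latents = rng.normal(size=(500, 128))   # stand-in for learned latent codes

pca = PCA(n_components=3)
points = pca.fit_transform(latents)     # (500, 3) coordinates for the XR scene

# Normalize into a unit cube so the cloud fits, e.g., a tabletop XR view;
# an XR client would then map each point to an interactable 3D marker.
points -= points.min(axis=0)
points /= points.max()
print(points.shape, points.min(), points.max())
```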