
Emergent Reality: Knowledge Formation from Multimodal Learning through Human-Robot Interaction in Extended Reality

Research Project

Project/Area Number 22K17981
Research Category

Grant-in-Aid for Early-Career Scientists

Allocation Type Multi-year Fund
Review Section Basic Section 61050: Intelligent robotics-related
Research Institution Ritsumeikan University

Principal Investigator

El Hafi Lotfi, Ritsumeikan University, Research Organization of Science and Technology, Associate Professor (90821554)

Project Period (FY) 2022-04-01 – 2025-03-31
Project Status Granted (Fiscal Year 2023)
Budget Amount
¥4,290,000 (Direct Cost: ¥3,300,000, Indirect Cost: ¥990,000)
Fiscal Year 2024: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2023: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2022: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Keywords Extended Reality / Human-Robot Interaction / Multimodal Learning / Emergent Reality / Knowledge Formation
Outline of Research at the Start

This proposal introduces the concept of Emergent Reality, a novel framework that combines multimodal unsupervised learning and human-robot interactions in extended reality to visualize the emergent phenomena derived from the robot's observations and intervene in its learning process.
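
The interactive loop this framework describes can be made concrete with a minimal sketch. The snippet below is an illustration only: a Gaussian mixture over fused multimodal features stands in for the project's actual multimodal unsupervised learner, and the correction mechanism (re-seeding a category mean from a user-relabeled observation) is an assumed, simplified form of intervention.

```python
# Minimal sketch of the Emergent Reality loop: form categories from
# multimodal observations, expose them for visualization, accept a user
# correction, and refit. Illustrative stand-in, not the project's model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Fused multimodal observations (e.g., visual and depth features).
vision = rng.normal(size=(200, 8))
depth = rng.normal(size=(200, 4))
observations = np.hstack([vision, depth])

# 1) Unsupervised category formation from the robot's observations.
model = GaussianMixture(n_components=5, random_state=0).fit(observations)
categories = model.predict(observations)

# 2) Visualization step: summarize each emergent category so an XR
#    interface could render it for the user.
for k in range(model.n_components):
    print(f"category {k}: {np.sum(categories == k)} observations")

# 3) Intervention step: the user relabels one observation in XR; the
#    corrected category is re-seeded around that observation and refit.
corrected_idx, corrected_label = 17, 2
means = model.means_.copy()
means[corrected_label] = observations[corrected_idx]
model = GaussianMixture(n_components=5, means_init=means,
                        random_state=0).fit(observations)
```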

Outline of Annual Research Achievements

Significant progress has been made in human-robot interactive learning within extended reality, with two main achievements: 1) a mixed reality-based 6D-pose annotation system for robot manipulation in service environments, which enhances the accuracy of pose annotation and reduces positional errors, and 2) an interactive learning system for 3D semantic segmentation with autonomous mobile robots, which improves segmentation accuracy in new environments and predicts new object classes from minimal additional annotations. Both achievements focused on creating human-readable representations that facilitate a deeper understanding of service robots' learning processes.
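
The positional and rotational errors measured by the 6D-pose annotation work can be illustrated with a short sketch. It assumes poses are given as 4x4 homogeneous transforms and uses standard error definitions (Euclidean distance between translations, angle of the relative rotation); the paper's exact metrics may differ.

```python
# Hedged sketch: error between an annotated 6D pose and a reference pose,
# with both poses as 4x4 homogeneous transforms (assumed representation).
import numpy as np

def pose_error(T_annotated, T_reference):
    """Return (positional error, rotational error in radians)."""
    # Positional error: Euclidean distance between the translation parts.
    pos_err = np.linalg.norm(T_annotated[:3, 3] - T_reference[:3, 3])
    # Rotational error: angle of the relative rotation R_ref^T @ R_ann.
    R_rel = T_reference[:3, :3].T @ T_annotated[:3, :3]
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return pos_err, np.arccos(cos_theta)

# Example: an annotation 2 cm off and rotated 5 degrees about z.
theta = np.deg2rad(5.0)
T_ref = np.eye(4)
T_ann = np.eye(4)
T_ann[:3, :3] = [[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0,            0.0,           1.0]]
T_ann[:3, 3] = [0.02, 0.0, 0.0]
print(pose_error(T_ann, T_ref))  # ~ (0.02 m, 0.087 rad)
```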

Current Status of Research Progress

2: Research has progressed on the whole more than it was originally planned.

Reason

The research is advancing smoothly, building on the first year's development of a mixed reality-based interface that significantly reduced user burden. The second year focused on multimodal observations in extended reality (XR) for creating human-readable representations that facilitate a deeper understanding of service robots' learning processes. Experiments with collaborative human-robot tasks in XR demonstrated enhanced interaction effectiveness, enabling more intuitive and direct user involvement in the robots' learning process.

Strategy for Future Research Activity

The final year will focus on the challenge of transforming complex latent spaces into intuitive representations within extended reality. The goal is to develop novel techniques that will allow users to visualize and interact with the latent space, thereby facilitating direct human intervention in the robot's learning process. The outcome is expected to enhance users' understanding and control over the knowledge formation in service robots.
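
One simple way such a latent-space visualization could be realized is sketched below, under the assumption of a linear projection (PCA) rather than the novel techniques the project aims to develop: latent codes become 3D points an XR client can render, and a user's edit to a point is mapped back to latent space through the inverse transform.

```python
# Illustrative sketch only: PCA stands in for the project's latent-space
# visualization technique; the latent vectors here are random placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
latents = rng.normal(size=(500, 32))  # robot's learned latent codes (placeholder)

# Project to 3D so each latent code becomes a point the user can inspect
# and manipulate in XR.
pca = PCA(n_components=3).fit(latents)
points_xr = pca.transform(latents)  # (500, 3) positions for an XR scene

# Intervention: the user drags one point in XR; the edit is mapped back
# to the latent space, where it could steer the robot's learning.
edited = points_xr[0] + np.array([0.1, 0.0, 0.0])
latent_edit = pca.inverse_transform(edited[None, :])  # back to 32-D
print(points_xr.shape, latent_edit.shape)
```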

Report (2 results)

  • 2023 Research-status Report
  • 2022 Research-status Report

Research Products (4 results)


Int'l Joint Research (1 result) / Journal Article (3 results; of which Int'l Joint Research: 3, Peer Reviewed: 3)

  • [Int'l Joint Research] Karlstad University (KaU) (Sweden)

    • Related Report
      2023 Research-status Report
  • [Journal Article] Mixed Reality-based 6D-Pose Annotation System for Robot Manipulation in Retail Environments (2024)

    • Author(s)
      Carl Tornberg, Lotfi El Hafi, Pedro Miguel Uriguen Eljuri, Masaki Yamamoto, Gustavo Alfonso Garcia Ricardez, Jorge Solis, Tadahiro Taniguchi
    • Journal Title

      Proceedings of 2024 IEEE/SICE International Symposium on System Integration (SII 2024)

      Volume: -, Pages: 1425-1432

    • DOI

      10.1109/sii58957.2024.10417443

    • Related Report
      2023 Research-status Report
    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] Interactive Learning System for 3D Semantic Segmentation with Autonomous Mobile Robots (2024)

    • Author(s)
      Akinori Kanechika, Lotfi El Hafi, Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi
    • Journal Title

      Proceedings of 2024 IEEE/SICE International Symposium on System Integration (SII 2024)

      Volume: -, Pages: 1274-1281

    • DOI

      10.1109/sii58957.2024.10417237

    • Related Report
      2023 Research-status Report
    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] Multimodal Object Categorization with Reduced User Load through Human-Robot Interaction in Mixed Reality (2022)

    • Author(s)
      Hitoshi Nakamura, Lotfi El Hafi, Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi
    • Journal Title

      2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

      Volume: -, Pages: 2143-2150

    • DOI

      10.1109/iros47612.2022.9981374

    • Related Report
      2022 Research-status Report
    • Peer Reviewed / Int'l Joint Research


Published: 2022-04-19   Modified: 2024-12-25  
