
2022 Fiscal Year Research-status Report

Emergent Reality: Knowledge Formation from Multimodal Learning through Human-Robot Interaction in Extended Reality

Research Project

Project/Area Number 22K17981
Research Institution Ritsumeikan University

Principal Investigator

ElHafi Lotfi  Ritsumeikan University, Research Organization of Science and Technology, Assistant Professor (90821554)

Project Period (FY) 2022-04-01 – 2025-03-31
Keywords Extended Reality / Human-Robot Interaction / Multimodal Learning
Outline of Annual Research Achievements

A human-robot interface using mixed reality (MR) was developed to reduce the user's burden during the interactive, multimodal, and on-site teaching of new knowledge to service robots. The effectiveness of the interface was evaluated using the System Usability Scale (SUS) and NASA Task Load Index (NASA-TLX) in three experimental scenarios: 1) no sharing of inference results with the user, 2) sharing inference results through voice dialogue (baseline), and 3) sharing inference results using the MR interface (proposed). The MR interface significantly reduced temporal, physical, and mental burdens compared to voice dialogue with the robot. The results were presented at IEEE/RSJ IROS 2022 and published in the Nikkan Kogyo Shimbun newspaper.
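The report cites the standard System Usability Scale but does not include raw questionnaire data. As background, SUS is computed from ten 1-5 Likert responses: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 to a 0-100 score. A minimal sketch with hypothetical responses (not data from this study):

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten Likert
    responses (1-5). Odd-numbered items contribute (r - 1), even-numbered
    items (5 - r); the sum is scaled by 2.5 to yield a 0-100 score."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical responses from one participant (illustrative only):
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # → 85.0
```

Scores above roughly 68 are conventionally read as above-average usability, which is how SUS results from the three scenarios would be compared.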

Current Status of Research Progress

2: Research has progressed on the whole more than it was originally planned.

Reason

The research has been progressing rather smoothly. In the first year, an MR interface was developed and tested for teaching new objects to service robots under human supervision, demonstrating a significant reduction in user burden compared to natural speech or gestures. For the second-year plan, the focus is on displaying multimodal observations made by robots in extended reality (XR) using human-readable representations during service interactions. To achieve this, a collaborative task between a human and a robot will be conducted, involving bidirectional sharing in XR of first-person observations of the service environment. This research is expected to further enhance the effectiveness of human-robot interaction.

Strategy for Future Research Activity

In the upcoming research stages, the focus will be on exploring methods for projecting high-dimensional latent structures into extended reality (XR) to provide insight into the learning processes of both humans and robots. This will involve developing novel techniques for transforming complex data into intuitive, human-readable representations in XR. The aim is to enable direct human intervention in the robot's probabilistic learning process by visualizing the latent space on top of the real and virtual spaces, allowing users to better understand and interact with the robot's decision-making.
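The report does not specify how the latent structures would be projected; as a minimal sketch under that assumption, one common approach is to reduce the latent vectors to three dimensions (here via PCA computed from an SVD of the centered data) so each point can be anchored as a marker in the XR scene. The function name and dimensions are hypothetical:

```python
import numpy as np

def project_latent_to_3d(z):
    """Project high-dimensional latent vectors to 3-D coordinates via
    PCA (SVD of the centered data matrix), e.g. for placement as
    markers in an XR scene. `z` has shape (n_samples, latent_dim)."""
    z_centered = z - z.mean(axis=0)
    # Rows of vt are principal axes, ordered by explained variance.
    _, _, vt = np.linalg.svd(z_centered, full_matrices=False)
    return z_centered @ vt[:3].T  # shape (n_samples, 3)

# Hypothetical 64-dimensional latent samples for 100 observations:
rng = np.random.default_rng(0)
points = project_latent_to_3d(rng.normal(size=(100, 64)))
print(points.shape)  # (100, 3)
```

Linear projections like PCA are only one option; nonlinear embeddings (e.g. t-SNE or UMAP) may preserve cluster structure better at the cost of interpretable axes.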

  • Research Products

    (1 result)


Journal Article (1 result) (of which Int'l Joint Research: 1 result, Peer Reviewed: 1 result)

  • [Journal Article] Multimodal Object Categorization with Reduced User Load through Human-Robot Interaction in Mixed Reality (2022)

    • Author(s)
      Lotfi El Hafi, Youwei Zheng, Hiroshi Shirouzu, Tomoaki Nakamura, Tadahiro Taniguchi
    • Journal Title

      Proceedings of 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)

      Volume: - Pages: 2143-2150

    • DOI

      10.1109/IROS47612.2022.9981374

    • Peer Reviewed / Int'l Joint Research


Published: 2023-12-25  
