Project/Area Number | 22K17981 |
Research Category | Grant-in-Aid for Early-Career Scientists |
Allocation Type | Multi-year Fund |
Review Section | Basic Section 61050: Intelligent robotics-related |
Research Institution | Ritsumeikan University |
Principal Investigator | ElHafi Lotfi, Ritsumeikan University, Research Organization of Science and Technology, Assistant Professor (90821554) |
Project Period (FY) | 2022-04-01 – 2025-03-31 |
Project Status | Granted (FY2022) |
Budget Amount *Note | 4,290 thousand yen (Direct Cost: 3,300 thousand yen, Indirect Cost: 990 thousand yen)
FY2024: 1,430 thousand yen (Direct Cost: 1,100 thousand yen, Indirect Cost: 330 thousand yen)
FY2023: 1,430 thousand yen (Direct Cost: 1,100 thousand yen, Indirect Cost: 330 thousand yen)
FY2022: 1,430 thousand yen (Direct Cost: 1,100 thousand yen, Indirect Cost: 330 thousand yen) |
Keywords | Extended Reality / Human-Robot Interaction / Multimodal Learning / Emergent Reality / Knowledge Formation |
Outline of Research at the Start |
This proposal introduces the concept of Emergent Reality, a novel framework that combines multimodal unsupervised learning and human-robot interaction in extended reality to visualize the emergent phenomena derived from the robot's observations and to intervene in its learning process.
|
Outline of Annual Research Achievements |
A human-robot interface using mixed reality (MR) was developed to reduce the user's burden during the interactive, multimodal, and on-site teaching of new knowledge to service robots. The effectiveness of the interface was evaluated with the System Usability Scale (SUS) and the NASA Task Load Index (NASA-TLX) (see the scoring sketch below) in three experimental scenarios: 1) no sharing of inference results with the user, 2) sharing inference results through voice dialogue (baseline), and 3) sharing inference results through the MR interface (proposed). The MR interface significantly reduced temporal, physical, and mental burdens compared to voice dialogue with the robot. The results were presented at IEEE/RSJ IROS 2022 and covered in the Nikkan Kogyo Shimbun newspaper.
|
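As a concrete reference for how the two questionnaires are scored, the following is a minimal sketch that computes a SUS score (0-100) and an unweighted (raw) NASA-TLX score from a single participant's responses. Only the standard scoring rules are assumed; the function names and example values are illustrative and are not taken from the study.

```python
from statistics import mean

def sus_score(responses):
    """System Usability Scale: ten Likert responses (1-5) in questionnaire order.
    Odd-numbered items contribute (response - 1), even-numbered items (5 - response);
    the summed contributions are scaled by 2.5 to give a 0-100 score."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses in the range 1-5")
    contributions = [(r - 1) if i % 2 == 0 else (5 - r) for i, r in enumerate(responses)]
    return sum(contributions) * 2.5

def raw_tlx_score(subscales):
    """Unweighted (raw) NASA-TLX: the mean of the six subscale ratings
    (mental, physical, and temporal demand, performance, effort, frustration),
    each rated on a 0-100 scale."""
    if len(subscales) != 6:
        raise ValueError("NASA-TLX expects six subscale ratings")
    return mean(subscales)

# Illustrative ratings for one participant in the proposed MR condition
# (not the study's data).
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
print(raw_tlx_score([30, 20, 25, 15, 35, 10]))    # -> 22.5
```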
Current Status of Research Progress |
2: Research has progressed rather smoothly
Reason
The research has been progressing rather smoothly. In the first year, an MR interface was developed and tested for teaching new objects to service robots under human supervision, demonstrating a significant reduction in user burden compared to natural speech and gestures. The second-year plan focuses on displaying the multimodal observations made by the robot in extended reality (XR) as human-readable representations during service interactions. To achieve this, a collaborative task between a human and a robot will be conducted, involving the bidirectional sharing in XR of first-person observations of the service environment (see the sketch below). This is expected to further enhance the effectiveness of human-robot interaction.
|
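The following is a minimal sketch, under stated assumptions, of what a bidirectionally shared first-person observation could look like: a small message carrying the observer's pose and the robot's current category inference, rendered as a human-readable label for display in XR. The class name, fields, and example values are hypothetical and do not describe the project's actual interface.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class SharedObservation:
    """A single first-person observation exchanged between the human's XR
    headset and the robot. All field names here are illustrative."""
    agent_id: str                      # "human" or "robot"
    timestamp: float                   # seconds since the session started
    pose: Tuple[float, float, float]   # observer position in a shared world frame
    category_probs: Dict[str, float] = field(default_factory=dict)  # robot's object inference

    def to_xr_label(self, top_k: int = 3) -> str:
        """Render the inference result as a short human-readable string
        suitable for an XR annotation anchored at the observed object."""
        ranked = sorted(self.category_probs.items(), key=lambda kv: kv[1], reverse=True)
        return ", ".join(f"{name}: {p:.0%}" for name, p in ranked[:top_k])

# Example: the robot shares an ambiguous inference; the XR label makes the
# uncertainty visible so the user can decide whether to intervene.
obs = SharedObservation("robot", 12.4, (0.8, 0.1, 1.2),
                        {"mug": 0.55, "bowl": 0.30, "can": 0.15})
print(obs.to_xr_label())  # mug: 55%, bowl: 30%, can: 15%
```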
Strategy for Future Research Activity |
In the upcoming research stages, the focus will be on exploring methods for projecting high-dimensional latent structures into extended reality (XR) to provide insight into the learning processes of both humans and robots. This will involve developing novel techniques for transforming complex data into intuitive, human-readable representations in XR. The aim is to enable direct human intervention in the robot's probabilistic learning process by visualizing the latent space on top of real and virtual spaces (see the projection sketch below), allowing users to better understand and interact with the robot's decision-making.
|
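As an illustration of one possible projection step, the sketch below maps high-dimensional latent vectors to 3-D coordinates that could be anchored in an XR scene. PCA (via SVD) is used purely as a placeholder because the source does not commit to a specific technique; the function name, scaling, and example data are assumptions.

```python
import numpy as np

def project_latents_to_xr(latents: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Project high-dimensional latent vectors to 3-D coordinates for placement
    in an XR scene. PCA via SVD is a stand-in for whatever projection the
    project ultimately adopts."""
    centered = latents - latents.mean(axis=0)
    # Right-singular vectors give the principal axes; keep the first three.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered @ vt[:3].T
    # Normalize so the point cloud fits a tabletop-scale XR volume.
    span = np.abs(coords).max() or 1.0
    return coords / span * scale

# Example: 200 latent vectors of dimension 64 (random stand-ins for the
# multimodal model's latent samples) mapped to XR-space positions.
rng = np.random.default_rng(0)
xr_points = project_latents_to_xr(rng.normal(size=(200, 64)), scale=0.5)
print(xr_points.shape)  # (200, 3)
```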