Research Project/Area Number | 22K17981 |
Research Category | Grant-in-Aid for Early-Career Scientists |
Allocation Type | Multi-year Fund |
Review Section | Basic Section 61050: Intelligent robotics-related |
Research Institution | Ritsumeikan University |
Principal Investigator | ElHafi Lotfi: Ritsumeikan University, Research Organization of Science and Technology, Associate Professor (90821554) |
Project Period (FY) | 2022-04-01 – 2025-03-31 |
Project Status | Granted (FY2023) |
Budget Amount *Note | 4,290 thousand yen (Direct Cost: 3,300 thousand yen, Indirect Cost: 990 thousand yen) |
FY2024: 1,430 thousand yen (Direct Cost: 1,100 thousand yen, Indirect Cost: 330 thousand yen)
FY2023: 1,430 thousand yen (Direct Cost: 1,100 thousand yen, Indirect Cost: 330 thousand yen)
FY2022: 1,430 thousand yen (Direct Cost: 1,100 thousand yen, Indirect Cost: 330 thousand yen)
Keywords | Extended Reality / Human-Robot Interaction / Multimodal Learning / Emergent Reality / Knowledge Formation |
Outline of Research at the Start |
This proposal introduces the concept of Emergent Reality, a novel framework that combines multimodal unsupervised learning and human-robot interaction in extended reality to visualize emergent phenomena derived from the robot's observations and to intervene in its learning process.
|
Outline of Annual Research Achievements |
Significant progress has been made in human-robot interactive learning within extended reality, with two main achievements: 1) a mixed reality-based 6D-pose annotation system for robot manipulation in service environments, which improves the accuracy of pose annotation and reduces positional errors, and 2) an interactive learning system for 3D semantic segmentation with autonomous mobile robots, which improves segmentation accuracy in new environments and predicts new object classes with minimal additional annotation. Both achievements focused on creating human-readable representations that facilitate a deeper understanding of service robots' learning processes.
|
Current Status of Research Progress (Category) |
2: Research has progressed rather smoothly
Reason |
The research is advancing smoothly, building on the first year's development of a mixed reality-based interface that significantly reduced user burden. The second year focused on multimodal observations in extended reality (XR) for creating human-readable representations that facilitate a deeper understanding of service robots' learning processes. Experiments on collaborative tasks between humans and robots in XR have demonstrated more effective interaction, enabling more intuitive and direct user involvement in the robots' learning process.
|
Strategy for Future Research Activity |
The final year will address the challenge of transforming complex latent spaces into intuitive representations within extended reality. The goal is to develop novel techniques that allow users to visualize and interact with the latent space, thereby facilitating direct human intervention in the robot's learning process. The outcome is expected to enhance users' understanding of, and control over, knowledge formation in service robots.
|