Summary of Research Achievements
The purpose of our research is to enhance human-computer interaction by understanding how users perform multiple gestures at the same time. We adopted two approaches: the first focuses on the low-level mathematical analysis of the execution of two simultaneous gestures; the second focuses on the high-level interaction capabilities of such gesture combinations.
For the first approach, the derivation of a mathematical predictive model adapted to multiple gestures is still in progress. For the second approach, we refined our design framework for combining two-handed gestures to interact with augmented physical environments. To anticipate future modality combinations, we studied how the gaze modality could be used not only for capturing but also for transmitting information on mobile devices, obtaining novel results that are significantly better than those of previous work in this area. Lastly, we also analyzed thumb-input interaction on smartphones.
Our plan for future research is to focus on the mathematical predictive model of gesture combinations. This work will then open several research avenues for novel interaction techniques. We plan to begin exploring combinations of gaze with hand or finger input.