2017 Fiscal Year Annual Research Report
Distant interaction applied to augmented physical objects with and without augmented reality
Project/Area Number | 17F17726 |
Research Institution | Kochi University of Technology |
Principal Investigator | REN Xiangshi, Kochi University of Technology, School of Information, Professor (00287442) |
Co-Investigator (Kenkyū-buntansha) | DELAMARE William, Kochi University of Technology, Research Institute, Foreign Postdoctoral Researcher |
Project Period (FY) | 2017-11-10 – 2019-03-31 |
Keywords | HCI / Gesture / Predictive Model / Gaze / Multimodality |
Outline of Annual Research Achievements |
The purpose of our research is to enhance human-computer interaction through an understanding of how users perform multiple gestures at the same time. We adopted two approaches: the first focuses on a low-level mathematical analysis of the execution of two simultaneous gestures; the second focuses on the high-level interaction capabilities of such gesture combinations.
For the first approach, we began by replicating an experiment from previous work (hand and wrist gestures) to study the results in detail. We found a correlation between velocity profiles and the degree to which two gestures can be performed in parallel. We are hence preparing multiple experiments to validate whether our findings hold for other combinations (e.g., hand and finger). We also defined a mathematical model to optimize hierarchical structures for command selection. This model can be adapted to any context, from linear software menus to gesture interaction. For the second approach, we first established a design framework for combining two-handed gestures to interact with augmented physical environments. This framework can inform interaction designers as to which combinations are efficient and preferred by end users. We also started exploring how to use gaze not only to perceive information but also to input it. The end goal is to use the gaze modality in efficient future combinations (with the hand or a finger, for instance).
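As an illustration of the velocity-profile comparison described above, the sketch below computes the Pearson correlation between the velocity profiles of two gestures performed at the same time. The data values, variable names, and the choice of Pearson correlation are our own assumptions for illustration; they are not taken from the study itself.

```python
# Hypothetical illustration: comparing the velocity profiles of two
# gestures performed in parallel via Pearson correlation.
# All numeric values below are made up for demonstration purposes.
from math import sqrt


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Synthetic velocity samples (arbitrary units) for a hand gesture and a
# wrist gesture executed simultaneously.
hand_velocity = [0.0, 2.1, 4.8, 6.0, 4.5, 2.0, 0.1]
wrist_velocity = [0.0, 1.8, 4.1, 5.5, 4.0, 1.7, 0.0]

r = pearson(hand_velocity, wrist_velocity)
print(f"velocity-profile correlation r = {r:.3f}")
```

A high coefficient (close to 1) would indicate that the two velocity profiles rise and fall together, which is one plausible way to quantify how well two gestures can be executed in parallel.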
The next steps consist of (a) validating the model for gesture combinations with other modalities, (b) finalizing our design framework, and (c) completing the studies on the use of the gaze modality.
|
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
Our methodology, based on low-level and high-level approaches, has allowed us to explore several aspects of the research topic. We anticipated the complexity of validating our predictive model for multiple input gesture combinations, and for this reason we also started exploring other approaches to contribute to this area. As such, we can confirm that our research is generally progressing well, with one international paper already accepted at a top-tier conference (ACM CHI) and two other research avenues (the design framework and the study of the gaze modality) expected to lead to two major publications in the upcoming year. Once the model is completed, the status will move 'beyond the original plan' thanks to these additional parallel contributions. In addition, since we have started exploring other input modalities, we are well positioned to tackle future work without delay; this can also be seen as a 'beyond the original plan' outcome.
|
Strategy for Future Research Activity |
Our plan for future research is to focus on the mathematical predictive model for gesture combinations. We anticipate the risk that further studies may reveal flaws in our model - or invalidate it - and hence require more detailed analysis. For this reason, we plan to provide several other high-level contributions (interaction capabilities) on the theme of gesture combinations. To this end, we will initiate publications on gaze-only interaction, on which future work can build new combinations (e.g., gaze and hand or finger).
First, we will refine our design framework and finalize it for publication (target journal: International Journal of Human-Computer Studies (IJHCS)). Second, we will conclude the study of gaze interaction for capturing information: the main experiment is nearly finished (80% complete) and the write-up is in progress (target conference: MobileHCI 2019). Lastly, we will conclude the studies (three in total, one completed) on the use of gaze as an input modality (note: we will prioritize the most advanced research avenues for publication purposes).
|