Research Project/Area Number | 26730095
Research Institution | Keio University
Principal Investigator | Kai Kunze, Keio University, Graduate School of Media Design, Project Associate Professor (00648040)
Research Period (FY) | 2014-04-01 – 2016-03-31
Keywords | eye gaze / user comprehension / eye wear / glasses
Summary of Research Achievements |
This report summarizes our progress towards determining a user's domain expertise using eye movement analysis. In summary, we validated the correlations between eye gaze and comprehension, and we are further exploring smart eyewear as a platform for estimating user comprehension.
We conducted three lab experiments: functional near-infrared spectroscopy (fNIRS), stationary eye tracking, and mobile eye tracking. The fNIRS experiments directly explore the relation between brain-region activation and eye gaze, using reading, problem-solving, and search tasks. The results show strong correlations between blink frequency, pupil diameter, and cognitive load: we can predict the difficulty level the user perceives for a task from pupil diameter and blink frequency, and we are now also exploring saccade features. The stationary experiments have so far focused on reading comprehension. Users read several Japanese and English texts, and we measure their comprehension level with questions. We evaluate eye-gaze features related to incomprehension; in an initial analysis, fixation duration and average saccade length are significant.
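The difficulty prediction described above can be sketched as a small feature-extraction and classification pipeline. The following is a minimal illustration, not our actual pipeline: it assumes per-trial blink timestamps and pupil-diameter samples, and uses a simple nearest-centroid classifier with illustrative feature values.

```python
import math

def extract_features(blink_times, pupil_samples, duration_s):
    """Per-trial features: blink frequency (blinks/min) and mean pupil diameter (mm)."""
    blink_freq = len(blink_times) / duration_s * 60.0
    mean_pupil = sum(pupil_samples) / len(pupil_samples)
    return (blink_freq, mean_pupil)

def train_centroids(trials):
    """trials: list of ((blink_freq, mean_pupil), difficulty_label).
    Returns one mean feature vector (centroid) per difficulty label."""
    sums, counts = {}, {}
    for (f1, f2), label in trials:
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += f1
        s[1] += f2
        counts[label] = counts.get(label, 0) + 1
    return {lab: (s[0] / counts[lab], s[1] / counts[lab]) for lab, s in sums.items()}

def predict(centroids, feats):
    """Assign the label whose centroid is closest in feature space."""
    return min(centroids, key=lambda lab: math.dist(feats, centroids[lab]))

# Illustrative training data, consistent with the reported direction of the
# correlations (higher cognitive load: larger pupils, fewer blinks).
centroids = train_centroids([
    ((18.0, 3.5), "easy"), ((16.0, 3.6), "easy"),
    ((10.0, 4.5), "hard"), ((9.0, 4.6), "hard"),
])
print(predict(centroids, (9.5, 4.5)))  # → hard
```

In practice the classifier and the feature set (including the saccade features mentioned above) would be chosen via cross-validated evaluation; the centroid model merely illustrates the idea.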
We collaborate with Tilman Dingler from the University of Stuttgart to design interfaces that increase comprehension while reading ("speed reading" interfaces), and we apply our comprehension detection to his data. We succeeded in designing and building an interface that attempts to maximize both speed and comprehension (compared to other techniques), and we explored user acceptance, also in relation to the document types consumed with these techniques.
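Speed-reading interfaces of this kind commonly use rapid serial visual presentation (RSVP), showing one word at a time at a controlled pace. The sketch below illustrates that general idea, not the interface we built; the pacing heuristics and their parameter values are assumptions for illustration.

```python
def rsvp_schedule(text, wpm=400.0, punctuation_pause=1.8, long_word_factor=0.02):
    """Assign each word a display duration (seconds) for RSVP presentation.

    The base duration follows the words-per-minute target; words ending in
    punctuation and unusually long words get extra time (illustrative
    heuristics, not measured values).
    """
    base = 60.0 / wpm
    schedule = []
    for word in text.split():
        d = base
        if word[-1] in ".,;:!?":
            d *= punctuation_pause  # pause at clause/sentence boundaries
        if len(word) > 8:
            d += long_word_factor * (len(word) - 8)  # slow down for long words
        schedule.append((word, round(d, 3)))
    return schedule

print(rsvp_schedule("read this now.", wpm=300))
```

A comprehension detector could then adapt `wpm` per reader, slowing down when incomprehension-related gaze features appear.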
Current Status of Research Progress |
2: Progressing smoothly on the whole
Reason
The stationary lab experiments are proceeding very smoothly. Although we might not be able to identify general eye-gaze patterns related to expertise that make classification possible across many tasks, for selected cognitive tasks (e.g. reading, problem solving, logic exercises, math problems) we can even design interventions that increase comprehension, or that keep comprehension constant while increasing speed. We have a first success with a speed-reading interface, and we are exploring further interfaces and interventions, especially to increase reading comprehension.
The biggest issue so far is eye-gaze estimation on unmodified tablets. Although we have promising initial results, obtaining a stable screen coordinate of eye gaze for the general user seems too difficult with this generation of tablets, due to lighting changes and user dependence (different eye colors, head movements). We will continue to explore the problem, yet we are also looking for other technologies to help us estimate the user's comprehension level. We have explored two so far. First, we use a low-cost eye tracker that can be used with tablets (Tobii EyeX, approx. 8,000 yen). We are quite pleased with initial tests of the device in the unconstrained experiments (high sampling rate and decent resolution). The second technology is smart eyewear: eyeglasses with embedded sensing to recognize and predict user comprehension.
Strategy for Future Research Activity |
Based on the initial results of our research, we altered the direction of the project slightly to accommodate new insights. We restrict the work to several specific tasks (rather than general expertise estimation) that work well for predicting expertise (e.g. reading, problem solving). From the stationary experiments we realized that estimating a user's domain expertise is influenced by many parameters: the user's fatigue, attention, and concentration levels, as well as environmental conditions. We are looking more into how to measure cognitive load using unobtrusive sensing, as it is a good indicator of a user's focus and attention. We believe that attention tracking over a day (together with the material a user focuses on) can give us a good estimate of a user's expertise.
We implemented basic reading detection on smart eyewear with electrooculography (EOG). This means we can obtain quantified data about reading (e.g. how fast and how much a user reads). In a next step we will evaluate how these metrics correlate with the user's expertise in certain fields. The interesting point about smart eyewear is that we can evaluate user understanding even when the user is not looking at a tablet or screen. We will evaluate reading volume in relation to academic performance. According to related work in cognitive science, reading volume should correlate directly with academic performance (critical-thinking skills and comprehension) and with language skills. If this is established, we will explore interactions and feedback to increase reading volume in students, as well as general attention tracking.
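Once a detector labels time windows as reading or not reading, the quantified reading data mentioned above can be aggregated as in the following sketch. It assumes a binary label per fixed-length window; the window length and the words-per-minute estimate are illustrative assumptions, not measured parameters of our system.

```python
def reading_stats(window_labels, window_s=5.0, est_wpm=250.0):
    """Aggregate binary reading/not-reading window labels (1 = reading)
    into quantified reading metrics: total reading time, number of
    contiguous reading episodes, and a rough words-read estimate
    (reading time multiplied by an assumed reading rate)."""
    episodes = 0
    prev = 0
    for label in window_labels:
        if label and not prev:  # rising edge starts a new episode
            episodes += 1
        prev = label
    reading_s = sum(window_labels) * window_s
    return {
        "reading_minutes": reading_s / 60.0,
        "episodes": episodes,
        "estimated_words": int(reading_s / 60.0 * est_wpm),
    }

# Ten 5-second windows: three separate reading episodes, 30 s reading total.
print(reading_stats([0, 1, 1, 0, 1, 0, 0, 1, 1, 1]))
```

Accumulating such per-day statistics is what would let us relate reading volume to academic performance in the planned evaluation.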
Causes of Carryover |
There were problems with eye-gaze detection on tablets. We have bought only one tablet so far, to test the software, and ran into many user-dependent problems. This led us to focus more on smart eyewear and other ways to detect user comprehension. We did not buy additional tablets as originally planned and could not start some of the pre-tests (so there was no need to pay experiment participants). Therefore, we spent less money than expected. Also, as J!NS provides us with hardware and prototype devices, costs have so far been small. However, we should next explore additional sensing modalities.
Expenditure Plan for Carryover Budget |
For conducting the long-term study we still need either tablets or smartphones, and we hope to conduct a larger study (meaning we need more devices; we will use the money saved in the first half of the project for this). The MEME prototypes should be provided by J!NS free of cost, yet we may want to add sensors to the glasses that are promising for comprehension recognition (e.g. a temperature sensor at the nose, heart rate). There are additional costs related to sensing. As we collaborate with Dr. Andreas Bulling from the Max Planck Institute, Saarbruecken, and Tilman Dingler from the University of Stuttgart, we added two international trips to discuss the experiments. They are experts in mobile sensing and HCI technologies related to knowledge acquisition and reading.