Establishing a Ground-Truth for Eye Tracking
Abstract. Calibration and performance evaluation of current eye trackers typically rely on comparing known target positions to measured gaze directions while a participant fixates on those targets. A mapping function or geometric eye model is then optimized based on this correspondence, essentially treating the calibration targets as the "ground truth" for each gaze direction. While this has worked reasonably well to achieve current calibration accuracies of around 0.5 degrees, attempting to optimize beyond this point reveals that calibration targets are more a self-report measure than true ground truth: participant compliance, fixational eye movements such as drift and microsaccades, and the accuracy with which the fovea or preferred viewing location itself is positioned all contribute to uncertainty in the "ground-truth" target location and thus set a lower bound on tracking accuracy.
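The optimization step described above can be sketched as a least-squares fit from tracker signal to target positions. The sketch below assumes pupil-center coordinates as the signal and a second-order polynomial basis; both are common but illustrative choices, not any specific vendor's method:

```python
import numpy as np

def fit_polynomial_calibration(pupil_xy, target_xy):
    """Fit a second-order polynomial mapping from pupil-center
    coordinates (N x 2) to on-screen target positions (N x 2)
    by ordinary least squares."""
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    # Design matrix with basis terms: 1, x, y, x*y, x^2, y^2
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, target_xy, rcond=None)
    return coeffs  # shape (6, 2): one column per screen axis

def apply_calibration(coeffs, pupil_xy):
    """Map raw pupil-center samples to screen coordinates."""
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return A @ coeffs
```

Any systematic error in the target positions themselves (the compliance and fixational-eye-movement issues above) is baked directly into `coeffs`, which is why the targets' uncertainty bounds the achievable accuracy.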
Many applications of eye tracking for virtual and augmented reality will require higher tracking fidelity than is currently available. In this workshop, we will explore the hypothesis that measuring ground-truth gaze in conjunction with a second, to-be-evaluated eye tracking system can help boost model and tracking accuracy in the long term. We define ground truth as the mapping of real-world content onto the retinal locus of fixation. Speakers will present different approaches from academia and industry, followed by a panel discussion on the viability and possibilities of ground-truth eye tracking approaches. To continue the conversation after the workshop, we invite participants to a Facebook-sponsored social after the main conference events.
Recording and analyzing gaze during website interactions with EyeLink eye trackers
Abstract. Eye tracking can be a powerful tool in usability research and graphical interface design, providing important information about where users direct their attention within the websites and applications they are interacting with. In website usability, for example, eye tracking can reveal which areas of a web page are read, which areas are skipped, or even which areas increase cognitive workload. In traditional eye tracking, the researcher has tight control over what is shown, where it is shown, and when it is shown. Analysis of the gaze data typically involves mapping gaze onto various areas of interest and reporting measures such as fixation count and dwell time. Eye tracking for usability research, however, introduces a number of complications that traditional stimulus presentation and analysis software do not always deal with adequately. For example, the participants themselves determine what is shown, and when and where it is shown. As such, an accurate recording of the screen is critical. Web pages often contain dynamic (moving) content and can themselves be scrolled, adding further complications to traditional analysis approaches, in which interest areas are typically static. This workshop will introduce new recording and analysis software from SR Research that allows researchers to record and quantify participants' gaze whilst they interact with websites. Key features include screen and audio recording, keypress and mouse logging, the ability to provide a live preview of the gaze data during recording, automatic scroll compensation at the analysis stage, automatic data segmentation and navigation based on URLs, data aggregation from multiple participants, mouse event data visualization and extraction, and new report variables specific to web page tracking.
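The AOI mapping and scroll compensation described above can be illustrated with a minimal sketch. The `Fixation` fields and the screen-to-page coordinate convention are assumptions for illustration, not SR Research's actual data format or implementation:

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float          # screen x (px)
    y: float          # screen y (px)
    duration: float   # fixation duration (ms)
    scroll_y: float   # vertical page scroll offset at fixation onset (px)

def aoi_measures(fixations, aoi):
    """Map fixations onto a page-anchored AOI, compensating for scrolling.

    `aoi` is (left, top, right, bottom) in page coordinates; screen gaze
    is converted to page coordinates by adding the scroll offset, so the
    AOI stays attached to its page content as the participant scrolls.
    Returns (fixation count, dwell time in ms)."""
    left, top, right, bottom = aoi
    count, dwell = 0, 0.0
    for f in fixations:
        page_y = f.y + f.scroll_y  # screen -> page coordinates
        if left <= f.x <= right and top <= page_y <= bottom:
            count += 1
            dwell += f.duration
    return count, dwell
```

The key design point is that static, page-anchored interest areas remain usable on scrollable pages once gaze is transformed into page coordinates; dynamic (moving) content would additionally require time-varying AOI bounds.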
Tobii Pro solutions for VR experiments
Abstract. While experiments in Virtual Reality (VR) have become much more common in recent years, they are still not as common nor as well-supported as standard screen-based experiments. The choice of research tool goes hand in hand with the research question of interest, but today the same question can be approached from different angles using screen-based experiments, glasses-based experiments, 360° VR media, and full 3D VR environments. The choice of what medium to use in a VR experiment is determined by the researcher's desired level of control over the stimulus, how representative it should be of the real world, which metrics will be used, and the time and resources available for the project.
This workshop will present Tobii Pro's solutions for conducting VR experiments, and will go through how areas of interest, trials, moving AOIs, fixation classification, and other concepts are handled in the experiment workflow. It will provide an understanding of which parts are taken care of by the software, and what is expected of the researchers themselves. Workshop attendees will get a chance to try the VR hardware and software solutions themselves.
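As an illustration of the kind of fixation classification such software performs, here is a minimal dispersion-threshold (I-DT) sketch. The thresholds and sample layout are assumptions for illustration; Tobii Pro's actual classifier and parameters may differ:

```python
def idt_fixations(samples, dispersion_px=35.0, min_duration_ms=100.0):
    """Dispersion-threshold (I-DT) fixation classification sketch.

    `samples` is a list of (t_ms, x, y) gaze samples. A window grows
    while its dispersion, (max x - min x) + (max y - min y), stays at
    or under the threshold; windows spanning at least `min_duration_ms`
    are reported as (onset_ms, offset_ms, centroid_x, centroid_y)."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        while j + 1 < n:
            win = samples[i:j + 2]
            xs = [s[1] for s in win]
            ys = [s[2] for s in win]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_px:
                break
            j += 1
        onset, offset = samples[i][0], samples[j][0]
        if offset - onset >= min_duration_ms:
            win = samples[i:j + 1]
            cx = sum(s[1] for s in win) / len(win)
            cy = sum(s[2] for s in win) / len(win)
            fixations.append((onset, offset, cx, cy))
            i = j + 1
        else:
            i += 1
    return fixations
```

In VR, the same idea is typically applied to gaze angles rather than screen pixels, since head and world motion make pixel dispersion ambiguous; that is one of the workflow decisions the workshop covers.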