Best Paper

Pupillary Light Reflex Correction for Robust Pupillometry in Virtual Reality

Marie Eckert (International Audio Laboratories Erlangen, Fraunhofer IIS, Erlangen, Germany), Thomas Robotham (International Audio Laboratories Erlangen, Fraunhofer IIS, Erlangen, Germany), Emanuël A. P. Habets (International Audio Laboratories Erlangen, Fraunhofer IIS, Erlangen, Germany), and Olli S. Rummukainen (International Audio Laboratories Erlangen, Fraunhofer IIS, Erlangen, Germany)

Best Short Paper

A Holographic Single-Pixel Stereo Camera Sensor for Calibration-free Eye-Tracking in Retinal Projection Augmented Reality Glasses

Eye-tracking is a key technology for future retinal-projection-based AR glasses, as it enables techniques such as foveated rendering or gaze-driven exit pupil steering, both of which increase the system’s overall performance. However, two of the major challenges video oculography systems face are robust gaze estimation in the presence of glasses slippage and the need for frequent sensor calibration. To overcome these challenges, we propose a novel, calibration-free eye-tracking sensor for AR glasses based on a highly transparent holographic optical element (HOE) and a laser scanner. We fabricate a segmented HOE that generates two stereo images of the eye region. A single-pixel detector, in combination with our stereo reconstruction algorithm, is used to precisely calculate the gaze position. In our laboratory setup, the eye-tracking sensor achieves a calibration-free accuracy of 1.35°, highlighting its suitability for consumer AR glasses.
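
The abstract does not spell out the stereo reconstruction step. As a rough illustration of the general principle of recovering a 3-D eye feature (for example, a pupil-center point) from two views by linear triangulation, here is a minimal Python sketch; it is not the authors’ algorithm, and the projection matrices, baseline, and feature coordinates are invented for the example.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one 3-D point from two views.
        P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates."""
        A = np.stack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

    # Hypothetical setup: two views of the same pupil-center feature,
    # 6 cm apart, looking along the z-axis (all values are placeholders).
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-0.06], [0.0], [0.0]])])
    X_true = np.array([0.01, -0.02, 0.5])
    x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
    x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
    print(triangulate(P1, P2, x1, x2))  # recovers approximately [0.01, -0.02, 0.5]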

Best Student Short Paper

Real-time head-based deep-learning model for gaze probability regions in collaborative VR

Eye behavior has gained much interest in the VR research community as an interactive input and a support for collaboration. Researchers have used head behavior and saliency to implement gaze inference models when eye tracking is unavailable. However, these solutions are resource-demanding and thus unfit for untethered devices, and their angular accuracy is around 7°, which can be a problem in high-density informative areas. To address this issue, we propose a lightweight deep learning model that generates the probability density function of the gaze as a percentile contour. This solution allows us to introduce a visual attention representation based on a region rather than a point, managing the trade-off between the ambiguity of a region and the error of a point. We tested our model on untethered devices with real-time performance and evaluated its accuracy, which outperformed our identified baselines (average fixation map and head direction).
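
A minimal sketch of the region-based representation described above, assuming the model outputs a discretized gaze probability map, is shown below; it extracts the smallest set of grid cells covering a chosen share of the probability mass (a highest-density region) and is an illustration, not the authors’ implementation.

    import numpy as np

    def percentile_region(prob_map, mass=0.68):
        """Boolean mask of the smallest set of cells whose summed probability
        reaches `mass`, i.e. a highest-density gaze region."""
        p = prob_map / prob_map.sum()              # normalize to a valid density
        order = np.argsort(p, axis=None)[::-1]     # cells sorted by density, descending
        csum = np.cumsum(p.ravel()[order])         # cumulative mass in that order
        k = np.searchsorted(csum, mass) + 1        # number of cells needed
        mask = np.zeros(p.size, dtype=bool)
        mask[order[:k]] = True
        return mask.reshape(p.shape)

    # Toy example: a coarse 64x64 grid over the field of view
    rng = np.random.default_rng(0)
    grid = rng.random((64, 64)) ** 4               # synthetic "density" with a few hot spots
    region = percentile_region(grid, mass=0.68)
    print(f"68% region covers {region.mean():.1%} of the grid")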

Best Technical Abstract

An Eye Opener on the Use of Machine Learning in Eye Movement Based Authentication

The viability of and need for eye movement-based authentication have been well established in light of the recent adoption of Virtual Reality headsets and Augmented Reality glasses. Previous research has demonstrated the practicality of eye movement-based authentication, but there remains room for improvement in achieving higher identification accuracy. In this study, we focus on incorporating linguistic features into eye movement-based authentication, and we compare our approach to authentication based purely on common first-order metrics across 9 machine learning models. Using GazeBase, a large eye movement dataset with 322 participants, and the CELEX lexical database, we show that the AdaBoost classifier is the best-performing model, with an average F1 score of 74.6%. More importantly, we show that the use of linguistic features increased the accuracy of most classification models. Our results provide insights into the use of machine learning models and motivate further work on incorporating text analysis into eye movement-based authentication.
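
For readers unfamiliar with this kind of evaluation, the sketch below frames subject identification from per-recording eye movement features as a scikit-learn cross-validation experiment; the synthetic features, subject counts, and hyperparameters are placeholders and do not reproduce the GazeBase/CELEX setup or the reported 74.6% F1 score.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in: rows = recordings, columns = per-recording features
    # (e.g., first-order metrics such as mean fixation duration and saccade
    # amplitude, plus linguistic features such as fixated-word frequency).
    rng = np.random.default_rng(42)
    n_subjects, n_recordings, n_features = 20, 8, 12
    X = rng.normal(size=(n_subjects * n_recordings, n_features))
    y = np.repeat(np.arange(n_subjects), n_recordings)   # subject identity labels

    clf = AdaBoostClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=4, scoring="f1_macro")
    print(f"macro-F1: {scores.mean():.3f} +/- {scores.std():.3f}")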

Best Paper at COGAIN

Attention of Many Observers Visualized by Eye Movements

Interacting with a group of people requires directing the attention of the whole group, and thus requires feedback about the crowd’s attention. In face-to-face interactions, head and eye movements serve as indicators of crowd attention. However, when interacting online, such indicators are not available. To substitute this information, gaze visualizations were adapted for a crowd scenario. We developed, implemented, and evaluated four types of visualizations of crowd attention in an online study with 72 participants, using lecture videos enriched with the audience’s gaze. All participants reported increased connectedness to the audience, especially for visualizations depicting the whole distribution of gaze, including spatial information. Visualizations that avoid spatial overlay by depicting only the variability were regarded as less helpful, for real-time as well as retrospective analyses of lectures. Improving our visualizations of crowd attention has potential for a broad variety of applications across all kinds of social interaction and communication in groups.
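
As a small illustration of one visualization family mentioned above, the sketch below aggregates gaze samples from many observers into a smoothed spatial attention map; the resolution, smoothing bandwidth, and simulated gaze points are assumptions and not the visualizations evaluated in the study.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def crowd_attention_map(gaze_points, width=1920, height=1080, sigma=40):
        """Aggregate gaze points (pixel coordinates) from many observers into a
        Gaussian-smoothed attention map normalized to [0, 1]."""
        hist = np.zeros((height, width))
        for x, y in gaze_points:
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < width and 0 <= yi < height:
                hist[yi, xi] += 1
        smoothed = gaussian_filter(hist, sigma=sigma)
        peak = smoothed.max()
        return smoothed / peak if peak > 0 else smoothed

    # Toy example: 72 simulated observers attending to the same slide region
    rng = np.random.default_rng(1)
    points = rng.normal(loc=(500.0, 300.0), scale=60.0, size=(72, 2))
    attention = crowd_attention_map(points)
    print(attention.shape, float(attention.max()))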

Introducing a Real-Time Advanced Eye Movements Analysis Pipeline

The Real-Time Advanced Eye Movements Analysis Pipeline (RAEMAP) is a pipeline for analyzing traditional positional gaze measurements as well as advanced gaze-derived metrics. The proposed implementation of RAEMAP includes real-time analysis of fixations, saccades, gaze transition entropy, and the low/high index of pupillary activity. RAEMAP will also provide real-time visualizations of fixations, fixations on AOIs, heatmaps, and dynamic AOI generation. This paper outlines the proposed architecture of RAEMAP.
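
As one concrete example of the listed metrics, gaze transition entropy is commonly computed from the sequence of fixated AOIs under a first-order Markov model; the sketch below follows that standard formulation and is not taken from the RAEMAP implementation.

    import numpy as np

    def gaze_transition_entropy(aoi_sequence):
        """Stationary and transition entropy (bits) of an AOI fixation
        sequence, using the usual first-order Markov formulation."""
        aois = sorted(set(aoi_sequence))
        idx = {a: i for i, a in enumerate(aois)}
        n = len(aois)
        counts = np.zeros((n, n))
        for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
            counts[idx[a], idx[b]] += 1
        row_sums = np.maximum(counts.sum(axis=1, keepdims=True), 1)
        p_ij = counts / row_sums                          # transition probabilities
        p_i = np.bincount([idx[a] for a in aoi_sequence], minlength=n) / len(aoi_sequence)
        h_stationary = -np.sum(p_i[p_i > 0] * np.log2(p_i[p_i > 0]))
        log_p = np.log2(np.where(p_ij > 0, p_ij, 1.0))    # log2(1) = 0 keeps zero entries inert
        h_transition = -np.sum(p_i[:, None] * p_ij * log_p)
        return h_stationary, h_transition

    # Toy example: fixations over three AOIs labeled A, B, and C
    print(gaze_transition_entropy(list("ABABBCACBA")))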
