ETVIS 2018: 3rd Workshop on Eye Tracking and Visualization

General Chairs

Lewis Chuang, lewis.chuang@tuebingen.mpg.de
Michael Burch, m.burch@tue.nl
Kuno Kurzhals, kuno.kurzhals@visus.uni-stuttgart.de

Program

Note: Paper talks are 12-minute presentations, followed by 3 minutes for questions.

Saturday, June 16, 2018 (13:30 - 18:00), Room S305
13:30-13:50
Opening
Session 1: Visualization (Room S305)
Session Chair: Kuno Kurzhals
13:50-14:35
Intuitive Visualization Technique to Support Eye Tracking Data Analysis: A User-Study
Vsevolod Peysakhovich, Christophe Hurter
Multiscale Scanpath Visualization and Filtering
Nils Rodrigues, Rudolf Netzel, Joachim Spalink, Daniel Weiskopf
The Hierarchical Flow of Eye Movements
Michael Burch
14:35-14:45
Break for Discussion
Session 2: Evaluation (Room S305)
Session Chair: Lewis Chuang
14:45-15:30
The Influence of Anxiety on Visual Entropy of Experienced Drivers
Gisele Gotardi, Martina Navarro, Paula Polastri, Paulo Schor, Dominic Orth, Raoul Oudejans, John van der Kamp, Geert Savelsbergh, Sérgio Rodrigues
Eye-Tracking Evaluation of 3D Thematic Maps
Stanislav Popelka
Visual Analysis of Eye Gazes to Assist Strategic Planning in Computer Games
Ayush Kumar, Michael Burch, Klaus Mueller
15:30-16:00
Coffee Break
Session 3: Applications (Room S305)
Session Chair: Michael Burch
16:00-17:15
Visualizing Pilot Eye Movements for Flight Instructors
David Rudi, Peter Kiefer, Martin Raubal
Improving the Adaptive Event Detection Algorithm of Nyström and Holmqvist for Noisy Data
Benedict Fehringer
GaRSIVis: Improving the Predicting of Self-Interruption during Reading using Gaze Data
Jan Pilzer, Shareen Mahmud, Vanessa Putnam, Xinhong Liu, Tamara Munzner
Region of Interest Generation Algorithms for Eye Tracking Data
Wolfgang Fuhl, Thomas C. Kübler, Hanna Brinkmann, Raphael Rosenberg, Wolfgang Rosenstiel, Enkelejda Kasneci
17:15-17:45
Panel Discussion
"How can visualization make a larger contribution to ETRA?"
Panelists: Enkelejda Kasneci (University of Tuebingen, DE), Peter Kiefer (ETH Zurich, CH), Thies Pfeiffer (University of Bielefeld, DE), Michael Raschke (Blickshift GmbH, DE), Andrew Duchowski (Clemson University, US)
17:45-18:00
Closing Remarks & Best Paper

Proceedings

ETVIS '18: Proceedings of the 3rd Workshop on Eye Tracking and Visualization

SESSION: Visualization

Intuitive visualization technique to support eye tracking data analysis: a user-study

  • Vsevolod Peysakhovich
  • Christophe Hurter

While fixation distributions are conventionally visualized using heat maps, there is still no commonly accepted technique for visualizing saccade distributions. Inspired by wind maps and the Oriented Line Integral Convolution (OLIC) technique, we visualize saccades by drawing ink droplets that follow the direction indicated by a flow direction map. This direction map is computed using kernel density estimation over the tangent directions at each saccade gaze point. The image is further blended with the corresponding heat map. The result is an animation or a static image showing the main directions of the transitions between different areas of interest. We also present results from a web-based user study in which naive, non-expert users were asked to identify the direction of the flow and simple patterns. The results showed that these visualizations can successfully support visual analysis of eye-tracking data. They also showed that animation eases the task and improves performance.
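As a rough sketch of the direction-map step described above, the following Python snippet averages saccade tangent directions into a per-cell flow field with a Gaussian kernel. The function name, grid representation, and bandwidth are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def flow_direction_map(saccade_points, tangents, grid_w, grid_h, bandwidth=40.0):
    """Kernel-weighted average of saccade tangent directions per grid cell
    (hypothetical sketch of a KDE-style direction map, not the paper's code).

    saccade_points: (N, 2) array of gaze point positions along saccades
    tangents:       (N, 2) array of unit tangent vectors at those points
    """
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    cells = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    flow = np.zeros((grid_h * grid_w, 2))
    for p, t in zip(saccade_points, tangents):
        # Gaussian kernel weight of every grid cell w.r.t. this gaze point
        w = np.exp(-np.sum((cells - p) ** 2, axis=1) / (2.0 * bandwidth ** 2))
        flow += w[:, None] * t
    # Normalize to unit direction vectors wherever any direction accumulated
    norms = np.linalg.norm(flow, axis=1, keepdims=True)
    flow = np.divide(flow, norms, out=np.zeros_like(flow), where=norms > 0)
    return flow.reshape(grid_h, grid_w, 2)
```

Ink droplets can then be advected along this field and blended with the heat map to produce the static or animated result.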

Multiscale scanpath visualization and filtering

  • Nils Rodrigues
  • Rudolf Netzel
  • Joachim Spalink
  • Daniel Weiskopf

The analysis of eye-tracking data can be very useful when evaluating controlled user studies. To support this analysis in a fast and easy fashion, we have developed a web-based framework for visual inspection of eye-tracking data and comparison of scanpaths based on fixation filtering and similarity measures. For the first part, we introduce a multiscale aggregation of fixations and saccades based on a spatial partitioning that reduces the visual clutter of overlaid scanpaths without changing the overall impression of large-scale eye movements. The multiscale technique abstracts the individual scanpaths and allows an analyst to visually identify clusters or patterns inherent to the gaze data without the need for lengthy precomputations. For the second part, we introduce an approach in which analysts can remove fixations from a pair of scanpaths in order to increase the similarity between them. This can be useful for discovering and understanding reasons for dissimilarity between scanpaths, for data cleansing, and for outlier detection. Our implementation uses the MultiMatch algorithm to predict similarities after the removal of individual fixations. Finally, we demonstrate the usefulness of our techniques in a use case with scanpaths that were recorded in a study with metro maps.
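The spatial-partitioning idea can be illustrated with a minimal sketch: fixations that fall into the same grid cell are merged into a single duration-weighted representative, and coarser scales simply use larger cells. The names and the exact merging rule are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def aggregate_fixations(fixations, durations, cell_size):
    """Merge all fixations within one grid cell into a duration-weighted
    centroid; larger cell_size yields a coarser scale level.

    fixations: (N, 2) array of fixation positions
    durations: (N,)  array of fixation durations
    """
    cells = np.floor(fixations / cell_size).astype(int)
    merged = {}  # cell -> (sum of x*dur, sum of y*dur, total duration)
    for cell, pos, dur in zip(map(tuple, cells), fixations, durations):
        x, y, d = merged.get(cell, (0.0, 0.0, 0.0))
        merged[cell] = (x + pos[0] * dur, y + pos[1] * dur, d + dur)
    # Each cell's representative: duration-weighted centroid + total duration
    return [((x / d, y / d), d) for x, y, d in merged.values()]
```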

The hierarchical flow of eye movements

  • Michael Burch
  • Ayush Kumar
  • Klaus Mueller

Eye movements are composed of spatial and temporal aspects. Moreover, not only the eye movements of a single subject are of interest: a data analyst is typically interested in the scanning strategies of a group of people in a condensed form. Such data aggregation can provide useful insights into visual attention over space and time, leading to the detection of possible visual problems or design flaws in the presented stimulus. In this paper we present a way to visually explore the flow of eye movements, i.e., we impose a layered hierarchical structure on the spatio-temporal eye movement data. To reach this goal, the stimulus is spatially divided into areas of interest (AOIs) and temporally or sequentially aggregated into time periods or subsequences. The weighted AOI transitions are used to model directed graph edges while the AOIs build the graph vertices. The flow of eye movements is naturally obtained by computing hierarchical layers for the AOIs, with the downward edges indicating the hierarchical flow between the AOIs on the corresponding layers.
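A minimal sketch of the graph construction and layering described above, assuming the AOI transition graph has already been reduced to a directed acyclic graph; the function names and the longest-path layering rule are illustrative choices, not the paper's implementation.

```python
from collections import Counter, defaultdict

def aoi_transition_graph(scanpaths):
    """Weighted directed edges from consecutive AOI visits: AOIs are the
    vertices, and each edge weight counts observed transitions."""
    edges = Counter()
    for seq in scanpaths:                  # one AOI-label sequence per subject
        for a, b in zip(seq, seq[1:]):
            if a != b:
                edges[(a, b)] += 1
    return edges

def layer_assignment(edges):
    """Longest-path layering: each AOI sits one layer below its deepest
    predecessor, so every retained edge points downward in the hierarchy."""
    preds = defaultdict(set)
    for a, b in edges:
        preds[b].add(a)
    layer = {}
    def depth(n, seen=()):
        if n in layer:
            return layer[n]
        if n in seen:                      # guard against residual cycles
            return 0
        layer[n] = 1 + max((depth(p, seen + (n,)) for p in preds[n]), default=0)
        return layer[n]
    for n in {v for e in edges for v in e}:
        depth(n)
    return layer
```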

SESSION: Evaluation

The influence of anxiety on visual entropy of experienced drivers

  • Gisele Gotardi
  • Paulo Schor
  • John van der Kamp
  • Martina Navarro
  • Dominic Orth
  • Geert Savelsbergh
  • Paula F. Polastri
  • Raoul Oudejans
  • Sergio T. Rodrigues

This study tested the use of entropy to identify changes in the behavior of drivers under pressure. Sixteen experienced drivers drove in a simulator wearing a head-mounted eye tracker under low- and high-anxiety conditions. Anxiety was induced by manipulating psychological factors such as peer pressure. Fixation transitions between AOIs (lane, speedometer, and mirrors) were calculated with a first-order transition matrix, transformed into a Markov probability matrix, and entered into the entropy equation. Drivers showed greater state-anxiety scores and higher mean heart rates following the manipulation. Under anxiety, drivers showed higher visual entropy, indicating more random scanning. This randomness implies poorer acquisition of information and may indicate impaired top-down control of attention provoked by anxiety.
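The entropy computation outlined in the abstract (first-order transition counts, row-normalized into Markov probabilities, entered into the Shannon entropy equation) can be sketched as follows; weighting the rows by observed state frequencies is an assumption about the exact equation used.

```python
import numpy as np

def gaze_transition_entropy(aoi_sequence, n_aois):
    """Shannon entropy of first-order AOI transitions (a sketch of the
    general gaze transition entropy method, not the authors' exact code).

    aoi_sequence: fixation-by-fixation AOI indices in 0..n_aois-1
    """
    # First-order transition count matrix
    T = np.zeros((n_aois, n_aois))
    for a, b in zip(aoi_sequence, aoi_sequence[1:]):
        T[a, b] += 1
    # Row-normalize into Markov transition probabilities p(j|i)
    row_sums = T.sum(axis=1, keepdims=True)
    P = np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)
    # Stationary weights estimated from observed state frequencies
    pi = row_sums.ravel() / row_sums.sum()
    # H = -sum_i pi_i * sum_j p_ij * log2 p_ij, with 0 * log 0 := 0
    with np.errstate(divide="ignore"):
        logP = np.where(P > 0, np.log2(P), 0.0)
    return float(-np.sum(pi[:, None] * P * logP))
```

Higher values indicate more random, less structured scanning over the AOIs.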

Eye-tracking evaluation of 3D thematic maps

  • Stanislav Popelka

Although many 3D thematic cartography methods exist, the effectiveness of their use is not well known. The described experiment comprised two parts focusing on the evaluation of two 3D thematic cartography methods (Prism Map and Illuminated Choropleth Map) compared to a simple choropleth map. The task in both parts of the experiment was to determine which of the marked areas showed a higher value of the displayed phenomenon. The correctness of answers, response time, and selected eye-tracking metrics were analysed. In the first part of the experiment, a higher number of correct answers was found for Prism Maps than for simple choropleth maps, but solving the task required more time. The Illuminated Choropleth Map showed a higher proportion of correct answers than a simple choropleth map. In the evaluation of the eye-tracking metrics, no statistically significant difference was found in most cases.

Visual analysis of eye gazes to assist strategic planning in computer games

  • Ayush Kumar
  • Michael Burch
  • Klaus Mueller

This work studies the use of a conventional eye tracking system for the analysis of an online game player's thinking processes. For this purpose, the eye gaze data of several users playing a simple online turn-based checkers game were recorded and made available in real time to gaze-informed players. The motivation behind this work is to determine whether making the eye-gaze data available can help these players predict the gaze-tracked opponent's further moves, and how this can be done most effectively. We also tested different orientations of the screen on which the gaze data were displayed. Through our visual and algorithmic analysis we validated (1) that prediction is possible and (2) that its accuracy depends strongly on the moves of players throughout the game as well as on the screen orientation. We believe that our study has implications for visual problem solving in general, especially in collaborative scenarios.

Visualizing pilot eye movements for flight instructors

  • David Rudi
  • Peter Kiefer
  • Martin Raubal

The idea of using eye tracking technology in pilot training has been suggested and successfully applied in the past. At the same time, the possibilities for visualizing eye tracking data have progressed considerably. Nonetheless, little effort has been invested in exploring which type of eye tracking visualization flight instructors prefer for evaluating pilots' visual scanning strategies. This paper introduces ongoing research that provides flight instructors with different eye tracking visualizations for assessing pilots' eye movements and evaluates those visualizations in an empirical study.

Improving the adaptive event detection algorithm of Nyström and Holmqvist for noisy data

  • Benedict C. O. F. Fehringer

Detecting eye tracking events such as fixations and saccades is one of the first important steps in eye tracking research. The adaptive algorithm by Nyström and Holmqvist [2010] estimates thresholds by computing a "peak velocity detection threshold" (PT) that depends on the data's noise level. However, thresholds that are too high might result in only a few detected saccades. The present study investigated a solution using an upper bound for PT. Fixations and saccades were computed for N = 68 participants who performed a fixation task and a visual detection test. The original version of the algorithm was compared with five versions utilizing upper bounds for PT (ranging from 100 deg/sec to 300 deg/sec) according to three predefined criteria. These criteria suggest an optimal upper bound of 200 deg/sec for the static and simply structured testing materials used.
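For reference, a sketch of the iterative data-driven threshold estimation of Nyström and Holmqvist [2010] together with the upper-bound clamp studied here: PT is repeatedly re-estimated as the mean plus six standard deviations of all velocity samples below the current PT. The initial value and convergence tolerance are assumptions.

```python
import numpy as np

def peak_threshold(velocities, init_pt=100.0, upper_bound=200.0, tol=1.0):
    """Iterative peak velocity detection threshold (PT) with an upper bound,
    as investigated in this paper; a sketch with assumed parameter defaults.

    velocities: 1-D array of sample-to-sample gaze velocities in deg/sec
    """
    pt = init_pt
    while True:
        below = velocities[velocities < pt]
        if below.size == 0:                # degenerate data: keep current PT
            return pt
        new_pt = min(below.mean() + 6.0 * below.std(), upper_bound)
        if abs(new_pt - pt) < tol:         # converged
            return new_pt
        pt = new_pt
```

Without the clamp, noisy recordings can drive PT so high that almost no saccades are detected; the paper's criteria suggest 200 deg/sec as a suitable bound for its materials.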

GaRSIVis: improving the predicting of self-interruption during reading using gaze data

  • Jan Pilzer
  • Shareen Mahmud
  • Vanessa Putnam
  • Xinhong Liu
  • Tamara Munzner

Gaze pattern data provides a promising opportunity to create a predictive model of self-interruption during reading, which could support active interventions to keep a reader's attention at times when self-interruptions are predicted to occur. We present two systems designed to help analysts create and improve such a model. GaRSIVis (Gaze Reading Self-Interruption Visualizer) integrates a visualization front-end suitable for data cleansing and a prediction back-end that can be run repeatedly as the input data is iteratively improved. It allows analysts refining the predictive model to filter out unwanted parts of the gaze data that should not be used in the prediction. It relies on data gathered by GaRSILogger, which logs gaze data and activity associated with interruptions during on-screen reading. By integrating data cleansing and our prediction results in our visualization, we enable analysts using GaRSIVis to arrive at a comprehensible way of understanding self-interruption from gaze-related features.

Region of interest generation algorithms for eye tracking data

  • Wolfgang Fuhl
  • Thomas Kuebler
  • Hanna Brinkmann
  • Raphael Rosenberg
  • Wolfgang Rosenstiel
  • Enkelejda Kasneci

Using human fixation behavior, we can infer regions that need to be processed at high resolution and regions where stronger compression can be favored. Analyzing the visual scan path solely on the basis of a predefined set of regions of interest (ROIs) limits the exploratory scope of the analysis. Insights can only be gained for those regions that the data analyst considered worthy of labeling. Furthermore, visual exploration is naturally time-dependent: a short initial overview phase may be followed by an in-depth analysis of regions that attracted the most attention. Therefore, the shape and size of regions of interest may change over time. Automatic ROI generation can help by reshaping the ROIs to the data of a time slice. We developed three novel methods for automatic ROI generation and show their applicability to different eye tracking data sets. The methods are publicly available as part of the EyeTrace software at http://www.ti.uni-tuebingen.de/Eyetrace.175L0.html
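As one illustrative approach to automatic, data-driven ROI generation (not necessarily one of the paper's three methods), the fixations of a time slice can be clustered by density and each cluster turned into a bounding-box ROI; rerunning this per time slice lets the ROIs reshape over time.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def generate_rois(fixations, eps=50.0, min_samples=5):
    """Density-based ROI generation: cluster fixation positions and return
    one axis-aligned bounding box per cluster (an illustrative sketch).

    fixations: (N, 2) array of fixation positions for one time slice
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(fixations)
    rois = []
    for k in set(labels) - {-1}:           # label -1 marks noise fixations
        pts = fixations[labels == k]
        rois.append((pts.min(axis=0), pts.max(axis=0)))
    return rois
```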
