Tutorials
Deep Learning in the Eye Tracking World
Abstract. Deep learning has recently become a buzzword in computer science. Many problems that until now could be solved only with sophisticated hand-crafted algorithms can now be solved with specially designed neural networks.
Deep learning is also becoming increasingly popular in the eye tracking world. It may be used wherever some kind of classification, clustering, or regression is needed. The tutorial aims to show potential applications (such as calibration, event detection, and gaze data analysis) and, more importantly, to show how to apply deep learning frameworks in such research.
There is a common belief that using neural networks requires a strong mathematical background, since there is much theory to understand before one can start working. There is also a belief that, because most deep learning frameworks are libraries for programming languages, it is necessary to be a programmer fluent in the language a given framework uses.
While both abilities are beneficial and may help in achieving better results, this tutorial aims to show that deep networks can be used even by people who know only a little of the theory. I will present ready-to-use networks with exemplary eye movement datasets and explain the most critical issues you will have to solve when preparing your own experiments. After the tutorial you will probably not be an expert in deep learning, but you will know how to use it in practice with your eye movement data.
Audience. The tutorial is addressed to anyone interested in deep learning; no special skills are required apart from some knowledge of eye tracking and eye movement analysis. Minimal programming skills are welcome, however, and may help in understanding the problem better.
Scope. This tutorial will include: (1) a gentle introduction to machine learning and to classification and regression problems, (2) an introduction to neural networks, (3) an explanation of Convolutional Neural Networks and their applications to eye movement data, and (4) Recurrent Neural Networks and their possible uses. The tutorial will NOT include a detailed mathematical treatment of neural network architectures and algorithms. All subjects will be explained with simple hands-on examples using real eye movement datasets.
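To give a flavour of the kind of hands-on example the tutorial refers to, here is a minimal Keras sketch, not taken from the tutorial materials, of a small 1D convolutional network that classifies fixed-length windows of gaze samples into event types. The window length, the (x, y) channel layout, and the three-class labelling are illustrative assumptions.

```python
# A minimal sketch of a 1D CNN for eye movement event classification.
# Not taken from the tutorial materials; window length, channel layout,
# and class set are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

WINDOW = 100   # gaze samples per window (e.g. 1 s at 100 Hz) -- assumed
N_CLASSES = 3  # e.g. fixation / saccade / pursuit -- assumed

model = models.Sequential([
    layers.Input(shape=(WINDOW, 2)),           # (x, y) gaze coordinates
    layers.Conv1D(32, 5, activation="relu"),   # local temporal features
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random data stands in for a real eye movement dataset.
X = np.random.randn(256, WINDOW, 2).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)
model.fit(X, y, epochs=2, batch_size=32)
```

Swapping the convolutional layers for recurrent ones (e.g. layers.LSTM) yields the kind of Recurrent Neural Network covered in part (4).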
Bio. Dr. Pawel Kasprowski is an Assistant Professor at the Institute of Informatics, Silesian University of Technology, Poland. He received his Ph.D. in Computer Science in 2004 under the supervision of Prof. Jozef Ober, one of the pioneers of eye tracking. He has experience in both eye tracking and data mining, and his primary research interest is the use of data mining methods to analyze the eye movement signal. Dr. Kasprowski teaches data mining at the university as well as in commercial courses, and is at the same time the author of numerous publications concerning eye movement analysis.
Additional information for prospective participants: http://www.kasprowski.pl/tutorial/
Discussion and standardisation of the metrics for eye movement detection
Abstract. By now, a vast number of algorithms and approaches for detecting various eye movements (fixations, saccades, PSO, pursuit, OKN, etc.) have been proposed and evaluated by researchers in the field. The reported results are not always directly comparable or easily interpretable, even by experts. Part of this problem lies in the diversity of the metrics used to test the algorithms.
The multitude of metrics reported in the literature is potentially confusing both to researchers who want to join the field and to established groups. Firstly, there are a number of sample-level measures: Cohen's kappa, sensitivity/specificity/F1 scores, and accuracy or disagreement rates. Secondly, a growing number of event-level measures exist: average statistics of the "true" and detected events (duration, amplitude, etc.), the quantitative and qualitative scores proposed by Komogortsev et al. [2010], different ways of computing F1 scores [Hooge et al. 2018, Zemblys et al. 2018, Startsev et al. 2018], variations of Cohen's kappa [Zemblys et al. 2018, Startsev et al. 2019], the temporal offset measures of Hooge et al. [2018], average intersection-over-union ratios [Startsev et al. 2018], and the Levenshtein distance between event sequences [Zemblys et al. 2018]. Almost all of the metrics listed above can be computed for all eye movement classes taken together or for each considered class in isolation.
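For the sample-level measures, the computation is straightforward once both the expert annotation and the detector output are expressed as one label per gaze sample. A minimal sketch using scikit-learn; the toy labels and the class encoding are illustrative assumptions:

```python
# Sample-level Cohen's kappa and per-class F1 for per-sample event labels.
# The toy label sequences and class encoding are illustrative assumptions.
from sklearn.metrics import cohen_kappa_score, f1_score

ground_truth = [0, 0, 0, 1, 1, 0, 0, 2, 2, 2]  # expert annotation
detected     = [0, 0, 1, 1, 1, 0, 0, 2, 2, 0]  # algorithm output

kappa = cohen_kappa_score(ground_truth, detected)
f1_per_class = f1_score(ground_truth, detected, average=None)

print(f"Cohen's kappa: {kappa:.3f}")
for cls, f1 in zip(("fixation", "saccade", "pursuit"), f1_per_class):
    print(f"F1 ({cls}): {f1:.3f}")
```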
Some aspects of these evaluation measures (especially at the level of events) affect their interpretability, bias, and suitability for various purposes and testing scenarios (e.g. whether expert manual annotations are available for comparison, or whether the stimuli were synthetically generated or recorded in naturalistic conditions). With the advent of machine learning-based models, the choice of a metric, a loss function, or a set of these should be motivated not just by the need to differentiate between a handful of algorithms, but also by the metric's ability to guide the training process over thousands of epochs.
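Event-level measures additionally require a matching step between "true" and detected events, and it is exactly this step that varies between the formulations cited above. As one illustration (one possible formulation, not the definition used in any particular paper), an average intersection-over-union score for events of a given class can be sketched as:

```python
# One possible (assumed) formulation of an event-level IoU score: each
# ground-truth event is matched to the same-class detected event that
# overlaps it most, and the IoU is averaged over ground-truth events.
def iou(a, b):
    """IoU of two events given as (start, end) sample-index intervals."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def mean_event_iou(true_events, detected_events):
    """Average best-match IoU over the ground-truth events."""
    if not true_events:
        return 0.0
    return sum(
        max((iou(t, d) for d in detected_events), default=0.0)
        for t in true_events
    ) / len(true_events)

true_fixations = [(0, 50), (80, 140)]      # (start, end) in samples
detected_fixations = [(5, 55), (85, 130)]
print(mean_event_iou(true_fixations, detected_fixations))  # ~0.78
```

How unmatched detections are penalized, and whether the matching is constrained to be one-to-one, are exactly the kinds of design choices that differ between the published measures.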
Right now, there is no clear-cut way of choosing a suitable metric for the problem of eye movement detection. Additionally, the set-up of an eye tracking experiment has a bearing on the applicable evaluation strategies. In this tutorial, we intend to provide an in-depth discussion of existing metrics, supplying both theoretical and practical insights. We will illustrate our recommendations and conclusions with examples and experimental evidence. This tutorial aims to facilitate discussion and to encourage researchers to employ uniform and well-grounded evaluation strategies.
Scope. The tutorial aims to provide its audience with a practice-oriented overview of the evaluation metrics that can be used in the field of eye movement detection. It covers a wide variety of set-ups (synthetic and naturalistic stimuli, presence or absence of manual annotations), different purposes of the evaluation (selecting the best algorithm for automatic detection; finding systematic biases in the annotations of different experts; training a machine learning model), and different evaluated entities (individual samples or whole events). The presentation will give recommendations on the choice of evaluation strategy for different scenarios and support the discussion of the various metrics with examples.
Audience. Researchers involved in eye movement detection (and even those who simply use existing detectors to make sense of their data) could benefit from the tutorial regardless of their background, either by discovering something new about the metrics they have or have not used before, or by contributing to the discussion and sharing their experiences.
Bio. Mikhail Startsev is a PhD student at the Technical University of Munich (TUM), Germany, and a member of the International Junior Research Group "Visual Efficient Sensing for the Perception-Action Loop" (VESPA) under the supervision of Michael Dorr. He received his Diplom degree in Computational Mathematics and Informatics from Lomonosov Moscow State University (LMSU), Russia, in 2015, where he was a member of the Graphics and Media Lab. Mikhail's research centres on the human visual system, with a particular emphasis on eye movements and saliency modelling, and has led to several publications in human and computer vision-related conferences and journals.
Dr. Raimondas Zemblys is currently a researcher at Siauliai University (Lithuania) and a research engineer at Smart Eye AB (Sweden). His main research interests are eye-tracking methodology, eye-movement data quality, event detection, and applications of deep learning to eye-movement analysis. He received his PhD in Informatics Engineering from Kaunas University of Technology in 2013, and worked as a postdoctoral researcher at Lund University (2013-2015) and at Michigan State University (2017-2018).
Gaze Analytics Pipeline
Abstract. This tutorial gives a short introduction to experimental design in general and with regard to eye tracking studies in particular. Additionally, the design of three different eye tracking studies (using stationary as well as mobile eye trackers) will be presented, and the strengths and limitations of their designs will be discussed. Further, the tutorial presents details of a Python-based gaze analytics pipeline developed and used by Prof. Duchowski and Ms. Gehrer. The pipeline consists of Python scripts for extraction of raw eye movement data, analysis and event detection via velocity-based filtering, and collation of events for statistical evaluation, with analysis and visualization of results in R. Attendees of the tutorial will have the opportunity to run the scripts on gaze data collected while participants categorized the emotional expressions of viewed faces. The tutorial covers basic eye movement analytics, e.g., fixation count and dwell time within AOIs, as well as advanced analysis using gaze transition entropy. Newer analytical tools and techniques such as microsaccade detection and the Index of Pupillary Activity will be covered, time permitting.
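To give a concrete flavour of two of the steps mentioned above, here are two minimal Python sketches. They are not Prof. Duchowski's actual scripts; the sampling rate, velocity threshold, and AOI names are illustrative assumptions. The first sketches velocity-based event detection: samples above a velocity threshold are labelled as saccades, the rest as fixations.

```python
# Velocity-threshold (I-VT style) sample labelling -- a simplified sketch,
# not the pipeline's actual filtering code.
import numpy as np

def ivt_classify(x_deg, y_deg, rate_hz=500.0, threshold_deg_s=30.0):
    """Label each gaze sample 'sac' or 'fix' by point-to-point velocity."""
    vx = np.gradient(x_deg) * rate_hz  # horizontal velocity, deg/s
    vy = np.gradient(y_deg) * rate_hz  # vertical velocity, deg/s
    speed = np.hypot(vx, vy)
    return np.where(speed > threshold_deg_s, "sac", "fix")

# Toy trace: a fixation, a fast jump, another fixation.
x = np.concatenate([np.full(50, 1.0), np.linspace(1, 8, 5), np.full(50, 8.0)])
y = np.zeros_like(x)
print(ivt_classify(x, y)[45:60])
```

The second sketches gaze transition entropy over AOIs, in a simplified form: the conditional entropy of the first-order AOI transition matrix, weighted by how often each AOI is the source of a transition.

```python
# Gaze transition entropy -- a simplified sketch of one common formulation.
from collections import Counter, defaultdict
from math import log2

def gaze_transition_entropy(aoi_seq):
    """Source-weighted conditional entropy (bits) of AOI transitions."""
    pairs = list(zip(aoi_seq, aoi_seq[1:]))
    out = defaultdict(Counter)
    for src, dst in pairs:
        out[src][dst] += 1
    h = 0.0
    for src, dsts in out.items():
        n_src = sum(dsts.values())
        p_src = n_src / len(pairs)
        h -= p_src * sum((n / n_src) * log2(n / n_src) for n in dsts.values())
    return h

print(gaze_transition_entropy(["eyes", "mouth", "eyes", "nose", "eyes", "mouth"]))
```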
Scope and Audience. The tutorial welcomes attendees at all levels of experience and expertise, from those just beginning to study eye movements and interested in the basics of experimental design to those well practiced in the profession who might wish to consider adopting the use of Python and R scripts, and possibly to contribute to, expand on, and improve the pipeline.
Bio. Dr. Duchowski is a professor of Computer Science at Clemson University. He received his baccalaureate (1990) from Simon Fraser University, Burnaby, Canada, and doctorate (1997) from Texas A&M University, College Station, TX, both in Computer Science. His research and teaching interests include visual attention and perception, eye tracking, computer vision, and computer graphics. He is a noted research leader in the field of eye tracking, having produced a corpus of papers and a monograph related to eye tracking research, and has delivered courses and seminars on the subject at international conferences. He maintains Clemson's eye tracking laboratory and teaches a regular course on eye tracking methodology that attracts students from a variety of disciplines across campus.
Nina Gehrer is a clinical psychologist who has been working on her PhD thesis at the University of Tübingen, Germany, since receiving her master's degree in 2015. Her main research interest lies in studying face and emotion processing using eye tracking and a wide range of analytic methods. As a clinical psychologist, she is particularly interested in alterations related to psychological disorders that could underlie the associated deficits in social information processing. She began working with Prof. Duchowski in 2016; since then, they have enhanced and applied his gaze analytics pipeline in the analysis of several eye tracking studies involving face and emotion processing. Recently, they have begun extending this research to gaze patterns during social interactions.
Eye Tracking in the Study of Developmental Conditions: A Computer Scientist's Primer
Abstract. Children with developmental conditions, such as autism, genetic disorders, and fetal alcohol syndrome, present with complex etiologies and can face significant challenges throughout their lives. Especially in very young children, heterogeneity across and within diagnostic categories makes it difficult to uniformly apply standard assessment methods, which often rely on assumptions about communicative or other developmental abilities. Eye tracking has emerged as a powerful tool for studying both the mechanistic underpinnings of atypical development and facets of cognitive and attentional development that may be of clinical and prognostic value. In this tutorial we discuss the challenges and approaches associated with studying developmental conditions using eye tracking. Using autism spectrum disorder (ASD) as a model, we discuss the interplay between the clinical facets of these conditions and the studies and techniques used to probe neurodevelopment.
Scope and Audience. This tutorial is geared towards engineers and computer scientists who may be interested in the variety of ways eye tracking can be used in the study of developmental mechanisms or for the development of clinically relevant methods; it does not assume deep knowledge of eye tracking hardware, algorithms, or the engineering-focused literature. Similarly, the tutorial will be broadly accessible, assuming limited or no knowledge of developmental conditions, clinical research, or autism.
Bio. Frederick Shic, Ph.D. is an Associate Professor of Pediatrics at the University of Washington and an Investigator at Seattle Children's Research Institute's Center for Child Health, Behavior and Development. Dr. Shic has been an autism researcher for 15 years and, as a computer scientist by training, brings an interdisciplinary perspective to early developmental, therapeutic, and phenotyping research. Dr. Shic leads the Seattle Children's Innovative Technologies Laboratory (SCITL), a lab focused on advancing and refining technology-based tools, including eye tracking, functional near infrared spectroscopy, robots, mobile apps, and video games. His goals are to understand the lifespan trajectories leading to heterogeneous outcomes in ASD, and to develop methods for positively intercepting these trajectories. To enable this, he focuses on big data perspectives of phenotypic variation, biomarker discovery enabled via technology, and rapid, adaptable, evolving frameworks for outcomes research applicable to diverse populations. His current and prior work, funded by the NIMH, the Simons Foundation, and Autism Speaks, includes developmental, psychological, and applied autism research as well as methods engineering aimed at creating and refining analytical and predictive techniques. Previously, he was an engineering undergraduate at Caltech, a Sony PlayStation video game programmer, a magnetic resonance spectroscopy brain researcher, and a graduate student at Yale Computer Science's Social Robotics Lab. It was during this graduate work that, needing child gaze patterns to program an attention system for a baby-emulating robot, he was first introduced to autism research at the Yale Child Study Center. He continued this work as an NIMH T32 postdoc in Childhood Neuropsychiatric Disorders and then as an Assistant Professor at the Yale Child Study Center.