Accepted Tutorials

Block: Methodology & Statistics
Length: Half Day
Abstract: In this two-session course, attendees will learn how to conduct empirical research in eye tracking and human-computer interaction (HCI). This course delivers an A-to-Z tutorial on designing a user study and demonstrates how to write a successful ETRA or HCI paper. It would benefit anyone interested in conducting a user study or writing an empirical research paper. Only a general knowledge of research in human-computer interaction is required.
Scope:
SESSION ONE:
- What is empirical research and what is the scientific method?
- Discovering and refining topics suitable for research in eye tracking
- Formulating "testable" research questions
- How to design an experiment (broadly speaking) to answer research questions
- Parts of an experiment (independent variables, dependent variables, counterbalancing, etc.)
- Group participation in a real experiment

SESSION TWO:
- Results and discussion of the experiment from Session One
- Experiment design issues ("within subjects" vs. "between subjects" factors, internal validity, external validity, counterbalancing test conditions, etc.; a counterbalancing sketch follows this list)
- Data analyses (main effects and interaction effects, requirements to establish cause and effect relationships, etc.)
- How to organize and write a successful research paper (including suggestions for style and approach, as per conference submissions)
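To give a concrete flavor of one design topic above, the sketch below (illustrative only, not course material) generates a balanced Latin square for counterbalancing test conditions; the condition names are hypothetical.

    # Counterbalancing via a balanced Latin square: each condition occupies
    # each serial position equally often (carryover is also balanced for an
    # even number of conditions).
    def balanced_latin_square(conditions):
        n = len(conditions)
        orders = []
        for participant in range(n):
            row, j, k = [], 0, 0
            for i in range(n):
                if i % 2 == 0:
                    idx = (participant + j) % n       # walk forward
                    j += 1
                else:
                    k += 1
                    idx = (participant + n - k) % n   # walk backward
                row.append(conditions[idx])
            orders.append(row)
        return orders

    # Hypothetical conditions; one row of presentation orders per participant.
    for order in balanced_latin_square(["A", "B", "C", "D"]):
        print(order)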
Audience: This course caters to attendees who are motivated to learn about and use empirical research methods in eye tracking and HCI research. Specifically, it is for those in academia or industry who evaluate interaction techniques for eye tracking using quantitative methods.
Teachers: Scott MacKenzie's research is in human-computer interaction with an emphasis on experimental methods, interaction devices and techniques, text entry, eye tracking, touch-based input, language modeling, accessible computing, gaming, and mobile computing. He has more than 180 peer-reviewed publications in the field of Human-Computer Interaction (including more than 40 from the ACM's annual SIGCHI conference). He presented the opening keynote at ETRA 2010. Home page: http://www.yorku.ca/mack/
Email: mack@cse.yorku.ca
Block: Methodology & Statistics
Length: Half Day
Abstract: This tutorial gives a short introduction to experimental design in general and to the design of eye tracking studies in particular. Additionally, the designs of three different eye tracking studies (using stationary as well as mobile eye trackers) will be presented, and their strengths and limitations will be discussed. Further, the tutorial presents details of a Python-based gaze analytics pipeline developed and used by Drs. Duchowski and Gehrer. The gaze analytics pipeline consists of Python scripts for extraction of raw eye movement data, analysis and event detection via velocity-based filtering, collation of events for statistical evaluation, and analysis and visualization of results using R. Attendees of the tutorial will have the opportunity to run the scripts on gaze data collected during the categorization of different emotional expressions while viewing faces. The tutorial covers basic eye movement analytics, e.g., fixation count and dwell time within AOIs, as well as advanced analysis using gaze transition entropy. Newer analytical tools and techniques such as microsaccade detection and the Index of Pupillary Activity will be covered, time permitting.
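The velocity-based filtering mentioned above can be sketched in a few lines of Python; the 30 deg/s threshold and the data layout are illustrative assumptions, not the pipeline's actual settings.

    import numpy as np

    def detect_fixation_samples(x, y, t, velocity_threshold=30.0):
        """Label each inter-sample interval: True = fixation, False = saccade.
        x, y are gaze positions in degrees of visual angle; t is in seconds."""
        dt = np.diff(t)
        velocity = np.hypot(np.diff(x), np.diff(y)) / dt   # deg/s
        return velocity < velocity_threshold

    # Toy data: 500 Hz, one 5-degree step standing in for a saccade.
    t = np.arange(0, 1, 1 / 500.0)
    x = np.where(t < 0.5, 0.0, 5.0)
    y = np.zeros_like(t)
    is_fixation = detect_fixation_samples(x, y, t)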
Scope: The scope of the tutorial extends from the basics of experimental design, including examples drawn from three different eye tracking studies, to preparation for the use of the Gaze Analytics Pipeline, how to run data through the pipeline, and consideration of traditional and advanced gaze analytics. Example source code and scripts are included so that audience members can work hands-on with examples.
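As one example of the advanced analytics named above, gaze transition entropy can be sketched as follows; this is a conceptual illustration in the spirit of Krejtz et al., not the pipeline's actual implementation, and the AOI sequence is a toy example.

    import numpy as np

    def transition_entropy(labels):
        """Entropy (bits) of the AOI-to-AOI transition matrix, weighting
        each source AOI by how often transitions start there."""
        aois = sorted(set(labels))
        index = {a: i for i, a in enumerate(aois)}
        counts = np.zeros((len(aois), len(aois)))
        for src, dst in zip(labels[:-1], labels[1:]):
            counts[index[src], index[dst]] += 1
        row_totals = counts.sum(axis=1)
        entropy = 0.0
        for i, total in enumerate(row_totals):
            if total > 0:
                p = counts[i][counts[i] > 0] / total
                entropy += (total / row_totals.sum()) * -(p * np.log2(p)).sum()
        return entropy

    print(transition_entropy(list("ABABCBACBA")))  # toy AOI sequence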
Audience: The tutorial welcomes attendees at all levels of experience and expertise, from those just beginning to study eye movements and interested in the basics of experimental design, to those well practiced in the profession who might wish to adopt Python and R scripts and possibly contribute to, expand on, and improve the pipeline.
Teachers: Dr. Duchowski is a professor of Computer Science at Clemson University. He received his doctorate (1997) from Texas A&M University, College Station, TX. His interests include visual attention and perception, eye tracking, computer vision, and computer graphics. He is a noted research leader in the field of eye tracking, having produced a corpus of papers and a monograph related to eye tracking research, and has delivered courses and seminars on the subject at international conferences. He maintains Clemson's eye tracking laboratory and teaches a regular course on eye tracking methodology that attracts students from a variety of disciplines across campus. Nina Gehrer is a clinical psychologist and postdoc at the University of Tübingen, Germany. She received her master's degree in 2015 and her PhD in 2020. Her main research interest lies in studying face and emotion processing using eye tracking and a wide range of analytic methods; she is particularly interested in possible alterations related to psychological disorders. She began working with Dr. Duchowski in 2016. Since then, they have enhanced and applied his gaze analytics pipeline in the analysis of several eye tracking studies. Recently, they have started to extend their research to gaze patterns during social interactions.
Email: nina.gehrer@uni-tuebingen.de
Block: Machine Learning
Length: Half Day
Abstract: Deep learning has recently become a buzzword in computer science. Many problems that until now could be solved only with sophisticated, hand-crafted algorithms can now be solved with specially developed neural networks. Deep learning is also becoming more and more popular in the eye tracking world. It may be used anywhere some kind of classification, clustering, or regression is needed. The tutorial aims to show potential applications (such as calibration, event detection, and gaze data analysis) and, more importantly, to show how to apply deep learning frameworks in such research. There is a common belief that using neural networks requires a strong mathematical background, since there is much theory to understand before starting work. There is also a belief that, because most deep learning frameworks are libraries in programming languages, it is necessary to be a programmer and to know the language being used. While both abilities are beneficial, because they may help in achieving better results, this tutorial aims to show that deep networks may be used even by people who know only a little of the theory. I will show you ready-to-use networks written using the Keras library in Python with exemplary eye movement datasets, and I will try to explain the most critical issues you will have to solve when preparing your own experiments. After the tutorial, you will probably not be an expert in deep learning, but you will know how to use it in practice with your eye movement data.
Scope:
The tutorial is divided into four parts:
(1) Introduction to Machine Learning (problems, algorithms, measures, examples)
(2) Neural Networks (architectures, implementations, Keras/Tensorflow, examples)
(3) Convolutional Neural Networks (idea, advantages, examples)
(4) Recurrent Neural Networks (sequences, architectures, examples)
All examples are given in Python 3 using the scikit-learn, OpenCV, TensorFlow, and Keras libraries.
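For a flavor of what such ready-to-use code looks like, here is a minimal sketch of a recurrent sequence classifier in Keras; the shapes, class count, and random stand-in data are assumptions for illustration, not the tutorial's actual examples.

    import numpy as np
    from tensorflow import keras

    SEQ_LEN, N_FEATURES, N_CLASSES = 100, 2, 3   # (x, y) samples; 3 event types

    model = keras.Sequential([
        keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        keras.layers.LSTM(32),
        keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Random arrays standing in for a real eye movement dataset.
    X = np.random.randn(256, SEQ_LEN, N_FEATURES).astype("float32")
    y = np.random.randint(0, N_CLASSES, size=256)
    model.fit(X, y, epochs=2, batch_size=32)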
Audience: The tutorial is addressed to anyone interested in deep learning; no special skills are required apart from some knowledge of eye tracking and eye movement analysis. However, minimal programming skills are welcome and may help in understanding the material.
Teachers: Pawel Kasprowski is an Associate Professor at the Silesian University of Technology, Poland. He received his PhD in Computer Science in 2004 under the supervision of Prof. Jozef Ober, one of the precursors of eye tracking. In 2019 he obtained his habilitation (DSc) for his contributions to the analysis and applications of the eye movement signal. He has experience in both eye tracking and data mining. His primary research interest is using data mining methods to analyze the eye movement signal. Pawel Kasprowski teaches data mining at the university as well as in commercial courses. At the same time, he is the author of numerous publications concerning eye movement analysis.
Email: kasprowski@polsl.pl
Block: Machine Learning
Length: Half Day
Abstract: Even after tremendous advances in deep learning across multiple domains, these models remain mostly black boxes when it comes to understanding the reasoning that drives them. Researchers have tried to explain the results of particular machine learning models by unveiling the black box. There have been some advances in explaining these black-box models in domains such as image classification, natural language understanding, and sentiment analysis. However, the explanations provided in these domains are still not systematic, bias-free, and accessible, which results in a lack of transparency in the results generated by these complex machine learning models. Researchers from various domains, including eye tracking, have long relied on complex black-box machine learning models, yet understanding these models is still not considered interesting enough to warrant researchers' attention. In this tutorial, we try to identify open challenges in interpreting machine learning models used for multiple datasets, including eye movement data. We will find out why it is difficult to interpret the results, keeping in mind the nature of the eye movement data used to train the models. We will also explore future research directions for explaining the black box within machine learning models on eye movement data. The tutorial will focus not so much on results as on model interpretability. We will give a hands-on session with already trained models on both image and eye tracking datasets. This tutorial aims to provide an overview of how we can make our machine learning models more transparent and their results more trustworthy in the case of high-stakes decisions.
Scope: In this tutorial, we will use Python for exploratory data analysis and data augmentation, and PyTorch for deploying deep learning models for interpretability. We will start with exploratory data analysis of multiple eye movement datasets, then move to eye movement data augmentation for interpretable machine learning / deep learning, which will take us to interpretable classification using different models (LSTMs, autoencoders, convolutional neural networks, RNNs, GRUs), and finally look into interpretable machine learning approaches with eye movement data. We will end the tutorial by discussing future work in this field toward developing reliable models for our various tasks.
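As a small illustration of the kind of interpretability method discussed (not the tutorial's own code), the following PyTorch sketch computes a gradient-based saliency map for a classifier over gaze sequences; the model, shapes, and random input are assumptions.

    import torch
    import torch.nn as nn

    # Stand-in for a trained classifier over 100 (x, y) gaze samples.
    model = nn.Sequential(nn.Flatten(),
                          nn.Linear(100 * 2, 64), nn.ReLU(),
                          nn.Linear(64, 3))
    model.eval()

    x = torch.randn(1, 100, 2, requires_grad=True)   # one gaze sequence
    scores = model(x)
    scores[0, scores.argmax()].backward()   # gradient of the top class score

    # |d score / d input|: which samples most influenced the decision.
    saliency = x.grad.abs().squeeze()
    print(saliency.shape)   # torch.Size([100, 2])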
Audience: Researchers involved in eye tracking who use machine learning / deep learning models will benefit most from the tutorial, regardless of their background. It would be interesting to discuss the challenges that researchers from different domains face with current black-box machine learning models, and how they can use interpretability to enhance their models and make them more transparent.
Teachers: Michael Burch is an assistant professor at Eindhoven University of Technology. His main research interests are information visualization, visual analytics, data science, and eye tracking. He organized and co-chaired the workshop on eye tracking and visualization (ETVIS) in its past four editions. Moreover, he was paper co-chair of VisSoft 2019 and VINCI 2019. Ayush Kumar is a senior Ph.D. candidate in the Computer Science Department at Stony Brook University, working in the Visual Analytics and Imaging Lab under the supervision of Prof. Klaus Mueller since 2014. He was a visiting researcher in the Visualization Research Center at the University of Stuttgart under the supervision of Prof. Daniel Weiskopf. He worked as a researcher at Brookhaven National Lab with Dr. Wei Xu and at AT&T Labs during summer 2019. His main research interests are data and visual analytics, data science, machine learning, and eye tracking. Prantik Howlader is a Ph.D. candidate in the Computer Science Department at Stony Brook University, working in the Computer Vision Lab under the supervision of Prof. Dimitris Samaras since 2018. He worked as a senior software developer at Cisco for two years and, prior to that, for almost two years at Wipro Technologies as a project engineer. He also worked under Dr. Vineeth N. Balasubramanian of IIT Hyderabad on explainable AI.
Email: aykumar@cs.stonybrook.edu
Block: Eyes & Brain
Length: Half Day
Abstract: Causes of dizziness include benign inner ear conditions as well as more severe and dangerous brain disorders such as stroke. In patients affected by acute dizziness and vertigo, eye movement examination outperforms neuroimaging for diagnosing stroke. For this reason, eye movements are routinely examined and recorded in neurology and neuro-otology clinics around the world. The use of eye movements is widespread in many fields of clinical research, including neurology, psychology, and psychiatry. On many occasions, however, eye movement recordings have remained restricted to research tools and have not made their way into the clinic. Here, instead, we will focus on use cases where eye movements directly provide information that can be diagnostic, either on their own or in combination with a few other pieces of data such as the patient's history. In some cases, the clinician's examination is purely qualitative and consists of looking directly at the patient's eyes. In other cases, eye movements are recorded, typically with video infrared goggles, and become part of the patient's clinical record. This tutorial will describe the tests that are regularly performed in the clinic to aid physicians in the diagnosis of patients with dizziness, oscillopsia, or double vision. These tests include the head impulse test, the test of skew, the Dix-Hallpike maneuver, the interpretation of nystagmus, etc. To properly understand the rationale of these tests, we will first cover the basic concepts of eye movement control together with the relevant anatomy and physiology.
Scope: The tutorial will review the behavioral properties and the neural substrates of the fundamental types of eye movements: the vestibular-optokinetic reflexes, saccades, smooth pursuit, gaze holding and fixation, and vergence. It will then review the clinical tests most commonly used in neurology, otolaryngology, and ophthalmology clinics to diagnose patients affected by dizziness, oscillopsia, and double vision. Interpretation of these tests allows clinicians to identify the disease and, in some cases, localize the lesion with more accuracy than neuroimaging.
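For readers who record head impulses quantitatively, the measure behind the head impulse test is the gain of the vestibulo-ocular reflex (VOR), the ratio of eye velocity to head velocity. A minimal sketch, assuming velocity traces in deg/s with opposite signs for compensatory eye movements and a hypothetical impulse window:

    import numpy as np

    def vor_gain(eye_velocity, head_velocity, impulse_threshold=50.0):
        """VOR gain over the impulse; a gain well below 1 suggests a
        peripheral vestibular deficit. A sketch, not a clinical algorithm."""
        eye = np.asarray(eye_velocity)
        head = np.asarray(head_velocity)
        window = np.abs(head) > impulse_threshold   # restrict to the impulse
        return -np.sum(eye[window]) / np.sum(head[window])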
Audience: The tutorial is intended for anybody interested in the basic properties of eye movements and the neural circuits responsible for their control, as well as researchers interested in clinical applications of eye movement recordings.
Teachers: Jorge Otero-Millan is a postdoctoral fellow in the Department of Neurology at Johns Hopkins University, currently working in the laboratory of David S. Zee, author of the book "The Neurology of Eye Movements". In July 2020, Jorge will join the faculty of the University of California, Berkeley as an assistant professor in the School of Optometry. With a background in engineering, Jorge has, during his PhD and postdoctoral training, collaborated with neurologists, ophthalmologists, and otolaryngologists analyzing the eye movements of patients suffering from disorders affecting the brain, the eyes, or the inner ear.
Email: jorge.oteromillan@gmail.com
Block: Eyes & Brain
Length: Full Day
Abstract: The combination of eye-tracking with simultaneous EEG recordings is a promising approach to study visual cognition under naturalistic viewing situations. This tutorial will introduce researchers to this relatively new technique and its advantages, with a focus on data analysis and modeling. It will cover the following topics: Building a suitable laboratory setup, data integration, strategies for removing eye movement artifacts from the EEG, and properties of saccade- and fixation-related brain potentials (SRPs/FRPs). A focus will be on the use of advanced regression techniques (deconvolution modeling with nonlinear predictors) to model the SRPs/FRPs and to statistically control for overlapping potentials and visuomotor nuisance variables during natural vision. In hands-on exercises, we will analyze a combined dataset, using the EYE-EEG toolbox (http://www2.hu-berlin.de/eyetracking-eeg) and the new "unfold" toolbox (http://www.unfoldtoolbox.org). Users should bring their own laptop with a recent version of MATLAB (including MATLAB’s statistics toolbox) installed.

Some relevant literature:
- Dimigen & Ehinger (2019). Analyzing combined eye-tracking/EEG experiments with (non)linear deconvolution models. bioRxiv. https://doi.org/10.1101/735530
- Ehinger & Dimigen (2019). Unfold: An integrated toolbox for overlap correction, non-linear modeling, and regression-based EEG analysis. PeerJ. e7838. https://doi.org/10.7717/peerj.7838
- Dimigen (2020). Optimizing the ICA-based removal of ocular EEG artifacts from free viewing experiments. NeuroImage. https://doi.org/10.1016/j.neuroimage.2019.116117
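The core idea of deconvolution modeling can be illustrated outside MATLAB as well. The numpy sketch below builds a time-expanded design matrix for two overlapping event types and recovers their response kernels by least squares; it is a conceptual illustration, not the unfold toolbox's implementation, and all sizes, event names, and the noise signal are assumptions.

    import numpy as np

    n_samples, kernel_len = 2000, 50
    rng = np.random.default_rng(0)
    onsets = {"stimulus": rng.choice(n_samples - kernel_len, 40, replace=False),
              "fixation": rng.choice(n_samples - kernel_len, 60, replace=False)}

    # Time-expanded design matrix: one column per event type and latency,
    # so overlapping responses are modeled jointly rather than averaged.
    X = np.zeros((n_samples, 2 * kernel_len))
    for col, times in enumerate(onsets.values()):
        for t in times:
            for lag in range(kernel_len):
                X[t + lag, col * kernel_len + lag] = 1.0

    eeg = rng.standard_normal(n_samples)   # stand-in for a recorded channel
    betas, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    stimulus_erp = betas[:kernel_len]      # deconvolved response kernels
    fixation_frp = betas[kernel_len:]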
Scope: See "Tutorial abstract"
Audience: The workshop is intended for eye movement or EEG researchers interested in combining both methods in their research and in learning about the advantages and current limitations of this approach. The tutorial will include a mix of frontal lectures and hands-on exercises that use MATLAB-based open toolboxes (EYE-EEG and unfold). Some prior experience with MATLAB is helpful (but not necessary). Prior experience with either eye tracking or EEG is recommended.
Teachers: Olaf is working as a visiting professor at Berlin's Humboldt-University (Germany) studying trans-saccadic visual perception and cognition, e.g. in reading, face perception, and more recently also scene viewing. Homepage: http://olaf.dimigen.de. Benedikt is a postdoctoral researcher at the Donders Institute (Netherlands), interested in various topics including visual perception, predictions, and statistics. Homepage: https://www.benediktehinger.de.
Email: olaf.dimigen@hu-berlin.de

Tutorials Schedule

Time          Room 1                      Room 2              Room 3          Room 4
              (Methodology & Statistics)  (Machine Learning)  (Eyes & Brain)  (Eyes & Brain)
8:00-10:00    T1                          T3                  T5              T6
10:30-12:30   T1                          T3                  T5              T6
13:30-15:30   T2                          T4                  -               T6
16:00-18:00   T2                          T4                  -               T6

(T1–T6 refer to the six tutorials above, in order of appearance; Room 3 is free in the afternoon.)