Challenge Track Call for Papers
In 2019, ETRA will host a Challenge Track for the first time, which in this opening edition will consist of a mining challenge. We call upon everyone interested to apply their analytical tools to a common human eye-movement data set. The challenge is for participants to creatively apply their newest and most exciting mining tools and approaches to make the most of these data.
Accepted submissions will be published as part of the proceedings.
Download the ETRA 2019 challenge dataset here.
How to participate in the challenge:
- Familiarize yourself with the dataset.
- Analyze the dataset with your mining tools.
- Report your findings in a 4-page challenge report.
- Submit your report on or before March 22, 2019.
Challenge Data
The dataset includes data from 960 trials of 45 seconds each, collected from 8 subjects (6 female, 2 male) who participated in 3 experimental sessions of ~60 minutes each. Subjects performed a variety of tasks, including visual fixation, visual search, and visual exploration. For a full description of the experimental methods, see Otero-Millan et al. (Journal of Vision, 2008) and McCamy et al. (Journal of Neuroscience, 2014).
Data Format
For each trial the dataset includes the following (a rough loading sketch is given after this list):
- Subject identifier.
- Task (Fixation or free viewing).
- Visual scene type (Blank, Natural, Picture puzzle, Where's Waldo).
- Visual scene image file.
- 45 s of binocular eye movement data.
- For Picture puzzle and Where's Waldo trials, the locations of the clicks with which subjects indicated the differences or the Waldo characters.
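To make the structure above concrete, here is a minimal sketch of how one might represent and load a single trial in Python. The file layout, file names, and column names are hypothetical (the dataset's actual distribution format may differ); adapt them to the downloaded data.

```python
# Minimal sketch of a per-trial record matching the fields listed above.
# File layout, file names, and column names are hypothetical; adapt them
# to the actual dataset distribution.
from dataclasses import dataclass
from typing import Optional

import pandas as pd


@dataclass
class Trial:
    subject: str                    # subject identifier
    task: str                       # "Fixation" or "FreeViewing"
    scene_type: str                 # "Blank", "Natural", "Puzzle", or "Waldo"
    scene_image: Optional[str]      # path to the visual scene image file
    samples: pd.DataFrame           # 45 s of binocular eye movement samples
    clicks: Optional[pd.DataFrame]  # click locations (Puzzle/Waldo trials only)


def load_trial(samples_csv: str, meta: dict) -> Trial:
    """Read one trial's eye movement samples and wrap them with its metadata."""
    samples = pd.read_csv(samples_csv)  # e.g. time, left_x, left_y, right_x, right_y
    return Trial(
        subject=meta["subject"],
        task=meta["task"],
        scene_type=meta["scene_type"],
        scene_image=meta.get("scene_image"),
        samples=samples,
        clicks=meta.get("clicks"),
    )
```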
Summary of data collection procedures
Eye position was acquired with a fast video-based eye movement monitor (EyeLink II, SR Research, Ontario, Canada) while subjects rested their head on a chin rest, 57 cm from a linearized video monitor (Barco Reference Calibrator V; 75 Hz refresh rate; 40 x 30 cm; 1024 x 768 px).
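Gaze positions recorded in screen pixels can be converted to degrees of visual angle from this viewing geometry (57 cm viewing distance, 40 x 30 cm screen, 1024 x 768 px). The helper below is a simple sketch under those assumptions; the function and variable names are our own.

```python
# Convert screen-pixel offsets (measured from the screen center) to degrees
# of visual angle, using the viewing geometry described above. Names are
# illustrative, not part of the dataset.
import math

SCREEN_W_CM, SCREEN_H_CM = 40.0, 30.0
SCREEN_W_PX, SCREEN_H_PX = 1024, 768
VIEW_DIST_CM = 57.0


def px_to_deg(x_px: float, y_px: float) -> tuple[float, float]:
    """Map a pixel offset from the screen center to visual angle in degrees."""
    x_cm = x_px * SCREEN_W_CM / SCREEN_W_PX
    y_cm = y_px * SCREEN_H_CM / SCREEN_H_PX
    return (math.degrees(math.atan2(x_cm, VIEW_DIST_CM)),
            math.degrees(math.atan2(y_cm, VIEW_DIST_CM)))


# At 57 cm, 1 cm on screen subtends roughly 1 deg, so the full screen spans
# about 38.7 x 29.5 deg and the 36 x 25.2 deg images nearly fill it.
print(px_to_deg(512, 384))  # a screen corner relative to the center: ~ (19.3, 14.7) deg
```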
Each trial corresponds to one of 8 experimental conditions (4 fixation conditions and 4 free-viewing conditions). We presented 15 different visual scenes per condition (except for the blank scene). Each visual scene was one of the following: a. Blank scene, b. Natural scene, c. “Picture puzzle”, or d. “Where’s Waldo”. The scenes presented in conditions b and c were scanned from the LIFE Picture Puzzle books (Adams, 2006a, 2006b, 2006c). The scenes presented in condition d were scanned from the Where’s Waldo books (Handford, 2007a, 2007b, 2007c). All images were equalized for average luminance and RMS contrast (except for the blank scene, which was 50% gray). All images had the same size (36 deg (w) x 25.2 deg (h)) and were centered on the monitor screen.

The visual scenes presented in the fixation and free-viewing conditions were identical, except for the presence/absence of the fixation cross. In the fixation conditions, the subject’s task (i.e., prolonged fixation) did not vary: only the visual scene changed. In the free-viewing conditions, the subject’s task varied according to the visual scene presented. Conditions a and b (blank scene and natural scene) required free visual exploration of the scene (i.e., the subject was instructed to explore the visual scene at will). Conditions c and d involved visual search. In condition c (Picture puzzles), the subject was presented with two side-by-side near-identical visual scenes and had to find all the differences between them. In condition d (Where’s Waldo), the subject had to perform the classic cartoon visual search task (i.e., find Waldo and other relevant characters/objects from the Where’s Waldo books).
At the end of the Picture puzzle and Where’s Waldo trials, the subjects were asked to indicate, using the mouse, the screen locations corresponding to the detected objects/differences. In the Picture puzzle condition, subjects were required to indicate the differences on the left image only.
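For a quick sanity check of these click data, one could overlay the reported locations on the corresponding scene image. The snippet below is a sketch that reuses the hypothetical field names from the Data Format section; click coordinates are assumed to be in image pixels.

```python
# Overlay the reported click locations on a scene image for a Picture puzzle
# or Where's Waldo trial. Field names and coordinate conventions are assumed,
# not specified by the dataset description.
import matplotlib.image as mpimg
import matplotlib.pyplot as plt


def plot_clicks(scene_image_path: str, clicks_xy) -> None:
    """Show the scene with each reported click marked as a red cross."""
    img = mpimg.imread(scene_image_path)
    plt.imshow(img)
    plt.scatter([x for x, _ in clicks_xy], [y for _, y in clicks_xy],
                marker="x", color="red")
    plt.title("Reported differences / characters")
    plt.axis("off")
    plt.show()
```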
Challenge Report
Challenge reports should describe the results of the work by providing an introduction to the problem addressed and why it is worth studying, the portion of the data set used, the approach and tools used, the results and their implications, and conclusions. The report should highlight the importance and significance of the work. We encourage the inclusion of replication instructions and the open-sourcing of tools to facilitate reproduction of the results.
Challenge reports must be at most 4 pages long and must conform, at the time of submission, to the formatting instructions that can be found at: http://www.siggraph.org/learn/instructions-authors.
Submission
All submissions must be made electronically, at https://easychair.org/conferences/?conf=etra19. Please choose the challenge track during submission.
Challenge Track Important Dates

| Date | Milestone |
| --- | --- |
| March 22, 2019 | Paper deadline |
| March 26, 2019 | Author notification |
| March 29, 2019 | Camera ready |
Challenge Track Chairs
For more information, please contact:
Susana Martinez-Conde (smart@neuralcorrelate.com), State University of New York, Downstate Medical Center, USA
Jorge Otero-Millan (jotero@jhu.edu), Johns Hopkins University, USA
Accepted Papers
Task Classification Model for Visual Fixation, Exploration, and Search
Ayush Kumar (Stony Brook University), Anjul Tyagi (Stony Brook University), Michael Burch (TU Eindhoven), Daniel Weiskopf (University of Stuttgart), Klaus Mueller (Stony Brook University)
Encodji: Encoding Gaze Data Into Emoji Space for an amusing Scanpath Classification ;)
Wolfgang Fuhl (University of Tübingen), Efe Bozkir (University of Tübingen), Benedikt Hosp (University of Tübingen), Nora Castner (University of Tübingen), David Geisler (University of Tübingen), Thiago Santini (University of Tübingen), Enkelejda Kasneci (University of Tübingen)
Towards a better description of visual exploration through temporal dynamic of ambient and focal modes
Alexandre Milisavljevic (Paris Descartes University), Thomas Le Bras (Paris Descartes University), Matei Mancas (University of Mons), Coralie Petermann (Sublime Skinz), Bernard Gosselin (University of Mons), Karine Doré-Mazars (Paris Descartes University)
Understanding the Relation between Microsaccades and Pupil Dilation
Sudeep Raj (Saint Mary's College of California), Chia-Chien Wu (Harvard Medical School/Brigham and Women's Hospital), Shreya Raj (University of Washington), Nada Attar (San Jose State University)
References
Otero-Millan J, Troncoso XG, Macknik SL, Serrano-Pedraza I, Martinez-Conde S (2008). Saccades and microsaccades during visual fixation, exploration, and search: foundations for a common saccadic generator. Journal of Vision, Dec 18;8(14):21.1-18. doi: 10.1167/8.14.21.
McCamy MB, Otero-Millan J, Di Stasi LL, Macknik SL, Martinez-Conde S (2014). Highly informative natural scene regions increase microsaccade production during visual scanning. Journal of Neuroscience, Feb 19;34(8):2956-66. doi: 10.1523/JNEUROSCI.4448-13.2014.