In conjunction with ICCV 2019
Oct 27th – Nov 2nd, 2019
COEX Convention Center, Seoul, Korea


Virtual Reality (VR) and Augmented Reality (AR) have garnered mainstream attention with products such as the Oculus Rift and Oculus Go. However, these products have yet to find broad adoption among consumers. Mass-market appeal may require revolutions in comfort, utility, and performance, along with careful consideration of user awareness and privacy related to eye-tracking features. These revolutions can, in part, be enabled by measuring where an individual is looking, where their pupils are, and what their eyes express – colloquially known as eye tracking. For example, foveated rendering, which renders at full resolution only where the user is looking, greatly reduces the power required to render realistic scenes in a virtual environment.

The goal of this workshop is to engage the broader community of computer vision and machine learning scientists in a discussion surrounding the importance of eye-tracking solutions for VR and AR that work for all individuals, under all environmental conditions.

This workshop will host two challenges that are structured around 2D eye-image datasets that we have collected using a prototype VR head-mounted device. More information about these challenges is located here. Entries to these challenges will address some outstanding questions relevant to the application of eye tracking for VR and AR platforms. We anticipate that the dataset released as part of the challenges will also serve as a benchmark dataset for future research in eye tracking for VR and AR.

Below is the list of topics that are of particular interest for this workshop:

  • Semi-supervised semantic segmentation of eye regions
  • Photorealistic reconstruction and rendering of eye images
  • Generative models for eye image synthesis and gaze estimation
  • Transfer learning for eye tracking from simulation data to real data
  • Eye feature encoding for user calibration
  • Temporal models for gaze estimation
  • Image-based gaze classification
  • Headset slippage correction, eye-relief estimation
  • Realistic avatar gazes

Latest News

Updates will be posted as the challenges progress.


Submissions must be written in English and submitted in PDF format. Each submitted paper must be no longer than four (4) pages, excluding references. Please refer to the ICCV submission guidelines for instructions regarding formatting, templates, and policies. Submissions will be reviewed by the program committee, and selected papers will be published in the ICCV workshop proceedings.

Submit your paper using this link before the August 31st deadline.



  • Paper submission deadline: August 19th, 2019
  • Paper acceptance notification: August 25th, 2019
  • Camera-ready deadline: August 30th, 2019
  • Publication venues: (a) ICCV workshop proceedings, (b) IEEE Xplore, (c) CVF Open Access


  • Paper submission deadline: August 31st, 2019
  • Paper acceptance notification: September 15th, 2019
  • Camera-ready deadline: September 27th, 2019
  • Publication venues: (a) IEEE Xplore, (b) CVF Open Access

Keynote Speakers

Oleg Komogortsev
Texas State University

Andreas Bulling
Max Planck Institute for Informatics

Ramesh Raskar
MIT Media Lab

Satya Mallick
Interim-CEO, OpenCV.org
CEO, Founder, Big Vision LLC


The schedule will be posted once it is finalized.


Organizers

Robert Cavin
Facebook Reality Labs

Jixu Chen
Facebook Reality Labs

Ilke Demir
DeepScale

Stephan Garbin
University College London

Oleg Komogortsev
Visiting Scientist, Facebook Reality Labs

Immo Schuetz
Postdoctoral Research Scientist, Facebook Reality Labs

Abhishek Sharma
Facebook Reality Labs

Yiru Shen
Facebook Reality Labs

Sachin S. Talathi
Facebook Reality Labs

Program Committee

  • Kaan Aksit, NVIDIA
  • Rob Cavin, Facebook Reality Labs
  • Jixu Chen, Facebook Reality Labs
  • Ilke Demir, DeepScale
  • David Dunn, University of North Carolina
  • Oleg Komogortsev, Texas State University
  • Immo Schuetz, Facebook Reality Labs
  • Sachin Talathi, Facebook Reality Labs
  • Lei Xiao, Facebook Reality Labs
  • Marina Zannoli, Facebook Reality Labs


Email: openedschallenge@fb.com