About

In conjunction with ICCV 2019
Oct 27th – Nov 2nd, 2019
COEX Convention Center, Seoul, Korea

Virtual Reality (VR) and Augmented Reality (AR) have garnered mainstream attention with products such as the Oculus Rift and Oculus Go. However, these products have yet to find broad adoption among consumers. Mass-market appeal may require revolutions in comfort, utility, and performance, as well as careful consideration of user awareness and privacy related to eye-tracking features. These revolutions can, in part, be enabled by measuring where an individual is looking, where their pupils are, and what their eye expression is, collectively known as eye tracking. For example, eye tracking enables foveated rendering, which greatly reduces the power required to render realistic scenes in a virtual environment.

The goal of this workshop is to engage the broader community of computer vision and machine learning scientists in a discussion about the importance of eye-tracking solutions for VR and AR that work for all individuals, under all environmental conditions.

This workshop will host two challenges structured around 2D eye-image datasets that we have collected using a prototype VR head-mounted device. More information about these challenges is located here. Entries to these challenges will address outstanding questions relevant to the application of eye tracking on VR and AR platforms. We anticipate that the dataset released as part of the challenges will also serve as a benchmark for future research in eye tracking for VR and AR.

Below is a list of topics of particular interest for this workshop:

  • Semi-supervised semantic segmentation of eye regions
  • Photorealistic reconstruction and rendering of eye images
  • Generative models for eye image synthesis and gaze estimation
  • Transfer learning for eye tracking from simulation data to real data
  • Eye feature encoding for user calibration
  • Temporal models for gaze estimation
  • Image-based gaze classification
  • Headset slippage correction and eye-relief estimation
  • Realistic avatar gazes

Latest News

This page will be updated as the challenges progress.

Submissions

Submissions must be written in English and submitted in PDF format. Each submitted paper must be no longer than four (4) pages, excluding references. Please refer to the ICCV submission guidelines for instructions regarding formatting, templates, and policies. Submissions will be reviewed by the program committee, and selected papers will be published in the ICCV Workshop proceedings.

Submit your paper using this link before the August 31st deadline.

Timeline

  • Call for workshop participation and dataset release: May 03, 2019
  • Workshop paper submission deadline: August 31, 2019
  • Notifications to authors: September 15, 2019
  • Camera ready deadline: September 30, 2019
  • Workshop: November 2, 2019

Keynote Speakers

Oleg Komogortsev
Texas State University

Andreas Bulling
Max Planck Institute for Informatics

Ramesh Raskar
MIT Media Lab

Satya Mallick
Interim CEO, OpenCV.org
CEO, Founder, Big Vision LLC

Schedule

The schedule will be posted once it is finalized.

Organizers

Robert Cavin
Facebook Reality Labs

Jixu Chen
Facebook

Ilke Demir
DeepScale

Stephan Garbin
University College London

Oleg Komogortsev
Visiting Scientist, Facebook Reality Labs

Immo Schuetz
Postdoctoral Research Scientist, Facebook Reality Labs

Abhishek Sharma
Facebook Reality Labs

Yiru Shen
Facebook Reality Labs

Sachin S. Talathi
Facebook Reality Labs

Program Committee

  • Kaan Aksit, NVIDIA
  • Rob Cavin, Facebook Reality Labs
  • Jixu Chen, Facebook Reality Labs
  • Ilke Demir, DeepScale
  • David Dunn, University of North Carolina
  • Oleg Komogortsev, Texas State University
  • Immo Schuetz, Facebook Reality Labs
  • Sachin Talathi, Facebook Reality Labs
  • Lei Xiao, Facebook Reality Labs
  • Marina Zannoli, Facebook Reality Labs

Contact

Email: openedschallenge@fb.com