Facebook OpenEDS Workshop 2021

Eye Tracking for AR/VR: Sensors and Applications

An immersive Virtual/Augmented Reality (VR/AR) experience requires high-resolution display capability and contextual knowledge of the user's intention to customize content and display elements. Both of these tasks critically depend on highly accurate eye-tracking solutions, ideally ahead of time. Unfortunately, the current generation of eye-tracking algorithms neither meets the exacting accuracy and precision demands nor reliably predicts eye movements. We therefore need to push the boundaries of current just-in-time eye-tracking solutions in terms of (a) accuracy and (b) eye-movement prediction capability.

The goal of this workshop is to motivate the broader community of computer vision and machine learning scientists to actively participate in developing novel approaches to eye tracking that help reach the aforementioned goals. This workshop is also motivated in part by the tremendous success of OpenEDS 2019, hosted at ICCV'19, which saw enthusiastic participation from researchers across disciplines such as computer vision, machine learning, eye tracking, and computer graphics. Continuing the theme of academic engagement, this year the workshop will focus on the emergence of novel sensor technology for eye tracking and the wide-scale adoption of eye tracking in applications ranging from AR and VR to gaming and health. The workshop will host two challenges, structured around fusing 2D-3D information and eye-movement prediction research.

These challenges are accompanied by the release of relevant datasets to support the advancement of research. More information about the challenges can be found on the OpenEDS 2021 Challenge Page. While entries to the challenges will focus on their specific areas of research, we anticipate that the datasets released as part of the challenges will also serve as benchmark datasets for future research in eye tracking for VR and AR.

Below is the list of topics that are of particular interest for this workshop:

  • Eye-tracking in VR/AR applications
  • Eye-movement analysis for biometrics, security and privacy
  • Eye-tracking in medicine and health care
  • Direct gaze-based interaction
  • Gaze-detection techniques focusing on robustness to sensor-slippage and noise
  • Novel sensors for eye tracking
  • Eye-movement prediction for gaze-estimation latency compensation
  • Temporal-prediction of eye-movements
  • Uncertainty modeling of eye-movement prediction
  • 2D/3D information fusion for semantic labeling of point-clouds
  • Point-cloud segmentation
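To make the latency-compensation theme in the prediction topics above concrete, here is a minimal, illustrative sketch (not from the workshop or any challenge baseline): extrapolating 2D gaze forward in time under a constant-velocity assumption. The function name, sample format, and parameters are all hypothetical, chosen only to illustrate the idea.

```python
# Illustrative only: a constant-velocity gaze extrapolator for
# latency compensation. Real eye-movement prediction must handle
# saccades, fixations, blinks, and sensor noise; this sketch does not.
from typing import List, Tuple


def predict_gaze(samples: List[Tuple[float, float, float]],
                 latency_ms: float) -> Tuple[float, float]:
    """Extrapolate 2D gaze (x, y) forward by latency_ms milliseconds,
    assuming constant velocity between the last two timestamped samples.

    samples: list of (t_ms, x, y) tuples, oldest first; needs at least two.
    """
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = t1 - t0
    if dt <= 0:
        return (x1, y1)  # degenerate timestamps: fall back to the last sample
    vx = (x1 - x0) / dt  # velocity in x per millisecond
    vy = (y1 - y0) / dt  # velocity in y per millisecond
    return (x1 + vx * latency_ms, y1 + vy * latency_ms)
```

Even this naive predictor shows why uncertainty modeling (another topic above) matters: a constant-velocity assumption is reasonable mid-saccade but badly wrong at saccade onset or offset.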

ICCV Workshop Date: October 17, 2021


09:00-09:15 Welcome
09:15-10:00 Gordon Wetzstein
10:00-10:15 Oral 1 – Gaze Prediction Challenge Winner
10:15-11:00 Sidney D’Mello
11:00-11:45 Short oral presentations of selected accepted papers
11:45-12:00 Oral 2 – 3D Semantic Segmentation Challenge Winner
12:00-12:45 James Rehg
12:45-12:55 Announce Challenge Winners
12:55-13:00 Closing Remarks



Submissions must be written in English and sent in PDF format. Please refer to the ICCV submission guidelines for instructions regarding formatting, templates, and policies. Submissions will be reviewed by the program committee, and selected papers will be published in the ICCV Workshop proceedings.
Submit your paper through the OpenReview submission portal by July 23, 2021.


Important Dates

  • Paper submission deadline: July 23, 2021
  • Notification of acceptance: August 6, 2021
  • Camera-ready deadline: August 17, 2021
  • Publication venue: (a) ICCV workshop proceedings, (b) IEEE Xplore, (c) CVF Open Access

Keynote Speakers

  • Gordon Wetzstein: Gordon Wetzstein is an Assistant Professor of Electrical Engineering and, by courtesy, of Computer Science at Stanford University. He is the leader of the Stanford Computational Imaging Lab and a faculty co-director of the Stanford Center for Image Systems Engineering. At the intersection of computer graphics and vision, computational optics, and applied vision science, Prof. Wetzstein's research has a wide range of applications in next-generation imaging, display, wearable computing, and microscopy systems. Prior to joining Stanford in 2014, Prof. Wetzstein was a Research Scientist at MIT; he received a Ph.D. in Computer Science from the University of British Columbia in 2011 and, before that, graduated with honors from the Bauhaus-Universität Weimar, Germany. He is the recipient of an NSF CAREER Award, an Alfred P. Sloan Fellowship, an ACM SIGGRAPH Significant New Researcher Award, a Presidential Early Career Award for Scientists and Engineers (PECASE), an SPIE Early Career Achievement Award, a Terman Fellowship, an Okawa Research Grant, the Electronic Imaging Scientist of the Year 2017 Award, an Alain Fournier Ph.D. Dissertation Award, and a Laval Virtual Award, as well as Best Paper and Demo Awards at ICCP 2011, 2014, and 2016 and at ICIP 2016.
  • Sidney D’Mello: Sidney D’Mello is an Associate Professor at the Institute of Cognitive Science and the Department of Computer Science at the University of Colorado Boulder (since July 1, 2017). He was previously an Associate Professor in the departments of Psychology and Computer Science at the University of Notre Dame. D’Mello leads the NSF National AI Institute for Student-AI Teaming (2020-2025), which aims to develop AI technologies to facilitate rich socio-collaborative learning experiences for all students. D’Mello’s research is at the intersection of the cognitive, affective, computing, and learning sciences. Specific interests include affective computing, social signal processing, intelligent learning environments, speech and language processing, human-computer interaction, and multimodal machine learning. His team studies the dynamic interplay between cognition and emotion while individuals and groups engage in complex real-world tasks, and applies insights gleaned from this basic research to develop intelligent technologies that help people achieve their fullest potential by coordinating what they think and feel with what they know and do.
  • James Rehg: James M. Rehg (pronounced “ray”) is a Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he is Director of the Center for Behavioral Imaging, co-Director of the Center for Computational Health, and co-Director of the Computational Perception Lab. He received his Ph.D. from CMU in 1995 and worked at the Cambridge Research Lab of DEC (and then Compaq) from 1995-2001, where he managed the computer vision research group. He received an NSF CAREER award in 2001 and a Raytheon Faculty Fellowship from Georgia Tech in 2005. He and his students have received a number of best paper awards, including best student paper awards at ICML 2005, BMVC 2010, Mobihealth 2014, and Face and Gesture 2015, and a Method of the Year award from the journal Nature Methods. Dr. Rehg serves on the Editorial Board of the Intl. J. of Computer Vision, and he served as General co-Chair for CVPR 2009 and Program co-Chair for CVPR 2017. He has authored more than 100 peer-reviewed scientific papers and holds 23 issued US patents. Dr. Rehg’s research interests include computer vision, machine learning, behavioral imaging, and mobile health (mHealth). He is the Deputy Director of the NIH Center of Excellence on Mobile Sensor Data-to-Knowledge (MD2K), which is developing novel on-body sensing and predictive analytics for improving health outcomes. Dr. Rehg is also leading a multi-institution effort, funded by an NSF Expedition award, to develop the science and technology of Behavioral Imaging: the capture and analysis of social and communicative behavior using multi-modal sensing, to support the study and treatment of developmental disorders such as autism.


Organizers

  • Karsten Behrendt: Karsten has a background in computer vision and machine learning and works on optimizing eye-tracking approaches for AR/VR products. His current focus is on the impact of latency on various applications and its mitigation.
  • Robert Cavin: Robert leads Eye Tracking Research at Facebook Reality Labs (FRL). He received a master's degree in Computer and Electrical Engineering from the University of Florida in 2001, with a focus on Robotics, Machine Learning, and Computer Architecture.
  • Qing Chao: Qing is a Research Scientist at FRL and holds a PhD in Electrical Engineering focused on optical imaging and 3D sensing for future AR/VR. She invents, designs, simulates, and experimentally demonstrates novel optical imaging sensors and eye-tracking methods using 3D computer vision algorithms and physical, Fourier, and geometrical optics.
  • Kara Emery: Kara is a doctoral candidate in the Integrative Neuroscience Program at the University of Nevada, Reno. She is interested in the computational strategies that underlie visual processing and learning.
  • Alexander Fix: Alexander is a Research Scientist at FRL. He received his PhD from Cornell University in 2016, with a focus on optimization algorithms for computer vision and discrete labeling problems. His current research interests are in 3D reconstruction and neural rendering, particularly focusing on eyes and faces.
  • Tarek Hefny: Tarek is a Tech Lead Manager on Eye Tracking at FRL and works on building data collection protocols, supporting the team's infrastructure, and building processing and annotation tools. Tarek holds a bachelor's degree in Computer Science from the American University in Cairo.
  • Cristina Palmero: Cristina received her bachelor's degree from the Polytechnic University of Catalonia (Spain) in 2011, along with the Best Bachelor Thesis Award for accessibility and labor integration of people with disabilities. She is currently a PhD student at the Universitat de Barcelona, Spain, focusing on automatic gaze estimation in human-human interaction, human-computer interaction, and AR/VR scenarios.
  • Abhishek Sharma: Abhishek is a Research Scientist at FRL. He received his BS degree in electrical engineering from the Indian Institute of Technology Roorkee in 2010 and his Ph.D. in computer science from the University of Maryland, College Park, in 2015. His research interests include visual automation, biometrics, and machine learning.
  • Yiru Shen: Yiru is a Research Scientist at FRL. She received her PhD from Clemson University in 2018, specializing in machine learning for gesture recognition and 3D point-cloud understanding. Her current research includes domain adaptation and machine-learning-based 3D shape generation and reconstruction.
  • Sachin Talathi: Sachin is a Research Manager at FRL. He received his B.Tech in Engineering Physics from the Indian Institute of Technology Bombay in 2001 and his PhD in Physics from the University of California San Diego in 2006. His research interests include computational neuroscience, neural signal processing, and machine learning.

Program Committee

  • Robert Cavin
  • Alexander Fix
  • Oleg Komogortsev is a Visiting Professor at Facebook Reality Labs and a Professor at Texas State University. Dr. Komogortsev conducts research in eye tracking with a focus on sensor design, cyber security (biometrics), human-computer interaction, usability, bioengineering, and health assessment.
  • Abhishek Sharma
  • Yiru Shen
  • Sachin Talathi


Email: openeds2021@fb.com