Oculus Research to present focal surface display discovery at SIGGRAPH
Oculus Research will present a paper on focal surface displays at the annual SIGGRAPH Conference in Los Angeles, July 30 – August 3. This groundbreaking work demonstrates the potential to improve image clarity and depth of focus in VR for a better, more natural visual experience. Click here to read the paper.
It’s not every day that we’re able to pull back the curtain on Oculus Research. Today we’re excited to share the team’s groundbreaking research on focal surface displays, which has potentially far-reaching implications for improved visual fidelity in VR. Oculus Research Scientists Nathan Matsuda, Alexander Fix, and Douglas Lanman recently had their work accepted by SIGGRAPH, and we couldn’t wait to share a preview.
Check out a behind-the-scenes look at focal surface displays, an innovation from Oculus Research that could dramatically improve visual clarity and depth of focus in VR:
Focal surface displays mimic the way our eyes naturally focus on objects at varying depths. Rather than stacking more and more fixed focal planes to cover the same range of depth, this new approach changes the way light enters the display, using spatial light modulators (SLMs) to bend the headset’s focus around 3D objects—increasing depth range and maximizing the amount of space represented simultaneously.
All of this adds up to improved image sharpness and a more natural viewing experience in VR.
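One way to build intuition for the idea above: instead of one fixed focal plane, the display approximates a scene’s per-pixel depths with a small number of focal depths that can each be shaped around the content. The paper solves a much harder joint optimization (smooth, bendable focal surfaces driven by a phase SLM); the sketch below is only a loose illustration of the decomposition step, grouping a depth map into a few focal depths with 1-D k-means. The function name and the use of k-means are our assumptions for illustration, not the authors’ actual method.

```python
import numpy as np

def fit_focal_surfaces(depth_map, k=3, iters=10):
    """Illustrative only: approximate a per-pixel depth map with k
    focal depths via 1-D k-means. Returns the piecewise-constant
    approximation and the fitted focal depths (one per surface)."""
    d = depth_map.ravel().astype(float)
    # spread initial focal depths across the scene's depth range
    centers = np.linspace(d.min(), d.max(), k)
    labels = np.zeros(d.size, dtype=int)
    for _ in range(iters):
        # assign each pixel to its nearest focal depth
        labels = np.argmin(np.abs(d[:, None] - centers[None, :]), axis=1)
        # move each focal depth to the mean depth of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = d[labels == j].mean()
    approx = centers[labels].reshape(depth_map.shape)
    return approx, centers

# A toy scene with three depth regions: three surfaces recover it
# far better than the single fixed focal plane of today's headsets.
depth = np.zeros((4, 6))
depth[:, 2:4] = 1.5
depth[:, 4:] = 3.0
approx, centers = fit_focal_surfaces(depth, k=3)
single, _ = fit_focal_surfaces(depth, k=1)
print(np.abs(approx - depth).max(), np.abs(single - depth).max())
```

With three surfaces the toy scene is matched exactly, while a single plane leaves large focus error; the real system additionally lets each surface bend smoothly in depth rather than staying flat.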
“Quite frankly, one of the reasons this project ran as long as it did is that we did a bunch of things wrong the first time around,” jokes Fix. “Manipulating focus isn’t quite the same as modulating intensity or other more usual tasks in computational displays, and it took us a while to get to the correct mathematical formulation that finally brought everything together. Our overall motivation was to do things the ‘right’ way—solid engineering combined with the math and algorithms to back it up. We weren’t going to be happy with something that only worked on paper or a hacked together prototype that didn’t have any rigorous explanation of why it worked.”
This project takes a highly interdisciplinary approach—one that, to the best of our knowledge, has never been tried before—combining leading hardware engineering, scientific and medical imaging, computer vision research, and state-of-the-art algorithms in pursuit of next-generation VR. It may even let people who wear corrective lenses comfortably use VR without their glasses.
“It’s very exciting to work in a field with so much potential but where many interesting challenges still haven’t been solved,” notes Matsuda, a graduate student in the Northwestern McCormick School of Engineering. “To do so as a student on a small team with highly experienced researchers has been an amazing learning process.”
While we’re a long way out from seeing results in a finished consumer product, this emerging work opens up an exciting and valuable new direction for future research to explore. We’re committed to publishing research results that stand to benefit the VR/AR industry as a whole.
“It’s no secret that multiple academic and industrial teams are racing to move beyond fixed-focus headsets,” explains Lanman. “Vergence-accommodation conflict (VAC), eyeglasses prescriptions, and sharp viewing of near objects all motivate adjusting the focus of a VR display. As a researcher, I’m excited to share what our team has uncovered. That’s the joy of publishing—it opens the door to anyone building upon your efforts. As long as you’re thick-skinned enough, you should prepare to be surprised how much further your work can be carried by the worldwide academic community. The greatest challenge is getting other researchers excited enough to do that follow-on work, and I’m looking forward to attempting that at SIGGRAPH.”
Stay tuned to the blog next week for an in-depth profile of the team behind this exciting research.