
October 1, 2018

The effects of natural scene statistics on text readability in additive displays

Human Factors and Ergonomics Society

The minimum contrast needed for optimal text readability with additive displays (e.g., AR devices) depends on the spatial structure of the background and the text. Natural scenes and text follow similar spectral patterns, so natural scenes can mask low-contrast text, making it difficult to read. In a set of experiments, we determine the minimum viable contrast for readability on an additive display.

By: Daryn R. Blanc-Goldhammer, Kevin J. MacKenzie
Areas: AR/VR
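
As a rough illustration of the quantity discussed above (not the study's own metric, stimuli, or thresholds): on an additive display the rendered text luminance adds to whatever background shows through it, so a Michelson contrast for the text region can be computed as in the sketch below.

def michelson_contrast_additive(text_luminance: float, background_luminance: float) -> float:
    """Michelson contrast of additive text rendered over a background.

    On an additive display the text region's luminance is the sum of the
    rendered text luminance and the background behind it, so
    L_max = L_bg + L_text and L_min = L_bg.
    """
    l_max = background_luminance + text_luminance
    l_min = background_luminance
    return (l_max - l_min) / (l_max + l_min)

# Example: 20 cd/m^2 of rendered text over a 100 cd/m^2 background.
print(michelson_contrast_additive(20.0, 100.0))  # ~0.09
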
June 18, 2018

Eye In-Painting with Exemplar Generative Adversarial Networks

Computer Vision and Pattern Recognition (CVPR)

This paper introduces a novel approach to in-painting where the identity of the object to remove or change is preserved and accounted for at inference time: Exemplar GANs (ExGANs). ExGANs are a type of conditional GAN that utilize exemplar information to produce high-quality, personalized in-painting results.

By: Brian Dolhansky, Cristian Canton Ferrer
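
To make the exemplar-conditioning idea concrete, here is a minimal sketch, not the authors' implementation: a generator that receives the image to in-paint together with an exemplar image of the same identity, concatenated along the channel dimension. Layer sizes, the training loop, and the discriminator are omitted or placeholders.

import torch
import torch.nn as nn

class ExemplarConditionedGenerator(nn.Module):
    def __init__(self, channels: int = 3, hidden: int = 64):
        super().__init__()
        # Input: masked image (3 channels) + exemplar image (3 channels),
        # stacked on the channel axis.
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=3, padding=1),
            nn.Tanh(),  # in-painted output in [-1, 1]
        )

    def forward(self, masked_image: torch.Tensor, exemplar: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([masked_image, exemplar], dim=1))

# Example shapes: a batch of two 64x64 crops around the region to in-paint.
g = ExemplarConditionedGenerator()
out = g(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
print(out.shape)  # torch.Size([2, 3, 64, 64])
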
June 18, 2018

Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies

Computer Vision and Pattern Recognition (CVPR)

We present a unified deformation model for the markerless capture of human movement at multiple scales, including facial expressions, body motion, and hand gestures.

By: Hanbyul Joo, Tomas Simon, Yaser Sheikh
June 18, 2018

Audio to Body Dynamics

Computer Vision and Pattern Recognition (CVPR)

We present a method that takes as input audio of violin or piano playing and outputs a video of skeleton predictions, which are then used to animate an avatar. The key idea is to create an animation of an avatar whose hands move as a pianist's or violinist's would, from the audio alone.

By: Eli Shlizerman, Lucio Dery, Hayden Schoen, Ira Kemelmacher-Shlizerman
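
A minimal sketch of an audio-to-keypoints mapping of this kind, under stated assumptions rather than the paper's architecture: per-frame audio features (e.g., MFCCs) fed to a recurrent model that regresses 2D keypoint coordinates for the upper body and hands. None of the dimensions below are taken from the paper.

import torch
import torch.nn as nn

class AudioToKeypoints(nn.Module):
    def __init__(self, audio_dim: int = 28, num_keypoints: int = 50, hidden: int = 256):
        super().__init__()
        self.rnn = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_keypoints * 2)  # (x, y) per keypoint

    def forward(self, audio_features: torch.Tensor) -> torch.Tensor:
        # audio_features: (batch, time, audio_dim)
        h, _ = self.rnn(audio_features)
        out = self.head(h)  # (batch, time, num_keypoints * 2)
        return out.reshape(out.shape[0], out.shape[1], -1, 2)

model = AudioToKeypoints()
keypoints = model(torch.randn(1, 200, 28))  # 200 audio frames
print(keypoints.shape)  # torch.Size([1, 200, 50, 2])
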
June 16, 2018

A common cause in the phenomenological and sensorimotor correlates of body ownership

International Multisensory Research Forum

The feeling that our limbs belong to our body is at the core of bodily self-consciousness. Over the years, limb ownership has been assessed through several types of measurements, including questionnaires and sensorimotor tasks assessing the perceived location of the hand with a visual-proprioceptive conflict.

By: Majed Samad, Cesare Parise, Sean Keller, Massimiliano Di Luca
Areas: AR/VR
June 13, 2018

A Comparative Study of Phoneme- and Word-Based Learning of English Words Presented to the Skin

Eurohaptics

Past research has demonstrated that speech communication on the skin is entirely achievable. However, there is still no definitive conclusion on the best training method that minimizes the time it takes for users to reach a prescribed performance level with a speech communication device. The present study reports the design and testing of two learning approaches with a system that translates English phonemes to haptic stimulation patterns (haptic symbols).

By: Yang Jiao, Frederico M. Severgnini, Juan S. Martinez, Jaehong Jung, Hong Z. Tan, Charlotte M. Reed, E. Courtenay Wilson, Frances Lau, Ali Israr, Robert Turcott, Keith Klumb, Freddy Abnousi
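
An illustrative sketch of the phoneme-to-haptic-symbol idea: each phoneme is looked up in a table of stimulation patterns, and a word is presented as the concatenation of its phonemes' patterns. The table entries, fields, and timing values below are invented placeholders, not the mappings used in the study.

from dataclasses import dataclass

@dataclass
class HapticSymbol:
    actuator: int        # which tactor on the skin to drive
    frequency_hz: float  # vibration frequency
    duration_ms: int     # stimulus duration

# Hypothetical table covering a few English phonemes.
PHONEME_TO_SYMBOL = {
    "K":  HapticSymbol(actuator=0, frequency_hz=300.0, duration_ms=120),
    "AE": HapticSymbol(actuator=3, frequency_hz=60.0,  duration_ms=200),
    "T":  HapticSymbol(actuator=1, frequency_hz=300.0, duration_ms=100),
}

def word_to_haptic_sequence(phonemes: list) -> list:
    """Translate a phoneme sequence (e.g., 'cat' -> K AE T) into haptic symbols."""
    return [PHONEME_TO_SYMBOL[p] for p in phonemes]

print(word_to_haptic_sequence(["K", "AE", "T"]))
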
June 13, 2018

Improving Perception Accuracy with Multi-sensory Haptic Cue Delivery

Eurohaptics

This paper presents a novel, wearable, and multi-sensory haptic feedback system intended to support the transmission of large sets of haptic cues that are accurately perceived by the human user. Previous devices have focused on the optimization of haptic cue transmission using a single modality and have typically employed arrays of haptic tactile actuators to maximize information throughput to a user.

By: Nathan Dunkelberger, Joshua Bradley, Jennifer L. Sullivan, Ali Israr, Frances Lau, Keith Klumb, Freddy Abnousi, Marcia K. O’Malley
June 13, 2018

Efficient Evaluation of Coding Strategies for Transcutaneous Language Communication

Eurohaptics

Communication of natural language via the skin has seen renewed interest with the advent of mobile devices and wearable technology. Efficient evaluation of candidate haptic encoding algorithms remains a significant challenge. We present four algorithms along with our evaluation methods, which are based on discriminability, learnability, and generalizability. Advantageously, mastery of an extensive vocabulary is not required.

By: Robert Turcott, Jennifer Chen, Pablo Castillo, Brian Knott, Wahyudinata Setiawan, Forrest Briggs, Keith Klumb, Freddy Abnousi, Prasad Chakka, Frances Lau, Ali Israr
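
The paper's evaluation pipeline is not reproduced here, but one common way to score pairwise discriminability in a same/different task is the sensitivity index d'; a generic sketch follows.

from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Example: a coding scheme whose 'different' pairs are detected 85% of the
# time, with a 20% false-alarm rate on 'same' pairs.
print(round(d_prime(0.85, 0.20), 2))  # ~1.88
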
June 1, 2018

The Immersive VR Self: Performance, Embodiment and Presence in Immersive Virtual Reality Environments

Book chapter in A Networked Self and Human Augmentics, Artificial Intelligence, Sentience

Virtual avatars are a common way to present oneself in online social interactions. From cartoonish emoticons to hyper-realistic humanoids, these online representations help us portray a certain image to our respective audiences.

By: Raz Schwartz, William Steptoe
April 21, 2018

Speech Communication through the Skin: Design of Learning Protocols and Initial Findings

Computer Human Interaction (CHI)

This study reports the design and testing of learning protocols with a system that translates English phonemes to haptic stimulation patterns (haptic symbols).

By: Jaehong Jung, Yang Jiao, Frederico M. Severgnini, Hong Z. Tan, Charlotte M. Reed, Ali Israr, Frances Lau, Freddy Abnousi