
July 8, 2019

Tactile Echoes: A Wearable System for Tactile Augmentation of Objects

IEEE World Haptics Conference

We present Tactile Echoes, a wearable system for augmenting tactile interactions with any object. This system senses vibrations in the fingertip that are produced by interactions of the finger with a touched object. It processes the vibration signals in real-time via a parametric signal network and returns them to the finger as “Tactile Echoes” of the touch interaction.

By: Anzu Kawazoe, Massimiliano Di Luca, Yon Visell
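The paper's parametric signal network is not specified in this summary; as a minimal sketch of the general idea, an "echo" of a fingertip vibration signal can be produced with a simple feedback-delay structure. All parameter names and values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tactile_echo(signal, fs=8000, delay_s=0.05, gain=0.6, n_echoes=3):
    """Mix a vibration signal with decaying, delayed copies of itself.

    Hypothetical parameters: delay_s and gain stand in for the tunable
    parameters of the paper's signal network; this is a plain
    delay-line echo, not the published algorithm.
    """
    d = int(delay_s * fs)                      # delay in samples
    out = np.zeros(len(signal) + n_echoes * d)
    out[:len(signal)] += signal                # dry (direct) signal
    for k in range(1, n_echoes + 1):           # each echo is quieter
        out[k * d : k * d + len(signal)] += (gain ** k) * signal
    return out
```

In a real-time system the same structure would run on a circular buffer fed by the fingertip vibration sensor, with the delayed output driving a vibrotactile actuator.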

July 8, 2019

The frequency of tactile adaptation systematically biases subsequent frequency identification

IEEE World Haptics Conference

Exposure to a particular sensory stimulation for a prolonged period of time often results in changes in the associated perception of subsequent stimulation. Such changes can take the form of decreases in sensitivity and/or aftereffects. Aftereffects often result in a rebound in the perception of the associated stimulus property when presented with a novel stimulus. The current study sought to determine if such perceptual aftereffects could be experienced following tactile stimulation at a particular frequency.

By: John de Grosbois, Raymond King, Massimiliano Di Luca, Cesare Parise, Rachel Bazen, Mounia Ziat
Areas: AR/VR

July 8, 2019

Exogenous cueing of visual attention using small, directional, tactile cues applied to the fingertip

IEEE World Haptics Conference

The deployment of visual spatial attention can be significantly influenced in an exogenous, presumably bottom-up manner. Traditionally, spatial cueing paradigms have been utilized to come to such conclusions. Although these paradigms have primarily made use of visual cues, spatially correspondent tactile cues have also been successfully employed.

By: John de Grosbois, Massimiliano Di Luca, Raymond King, Cesare Parise, Mounia Ziat
Areas: AR/VR

July 7, 2019

Affective touch communication in close adult relationships

IEEE World Haptics Conference

Inter-personal touch is a powerful aspect of social interaction that we expect to be particularly important for emotional communication. We studied the capacity of closely acquainted humans to signal the meaning of several word cues (e.g. gratitude, sadness) using touch sensation alone.

By: Sarah McIntyre, Athanasia Moungou, Rebecca Boehme, Peder M. Isager, Frances Lau, Ali Israr, Ellen A. Lumpkin, Freddy Abnousi, Håkan Olausson
Areas: AR/VR

July 7, 2019

Uncovering Human-to-Human Physical Interactions that Underlie Emotional and Affective Touch Communication

IEEE World Haptics Conference

Couples often communicate their emotions, e.g., love or sadness, through physical expressions of touch. Prior efforts have used visual observation to distinguish emotional touch communications by certain gestures tied to one’s hand contact, velocity and position. The work herein describes an automated approach to eliciting the essential features of these gestures.

By: Steven C. Hauser, Sarah McIntyre, Ali Israr, Håkan Olausson, Gregory J. Gerling
Areas: AR/VR

July 7, 2019

From Human-to-Human Touch to Peripheral Nerve Responses

IEEE World Haptics Conference

Human-to-human touch conveys rich, meaningful social and emotional sentiment. At present, however, we understand neither the physical attributes that underlie such touch, nor how the attributes evoke responses in unique types of peripheral afferents. Indeed, nearly all electrophysiological studies use well-controlled but non-ecological stimuli. Here, we develop motion tracking and algorithms to quantify physical attributes – indentation depth, shear velocity, contact area, and distance to the cutaneous sensory space (receptive field) of the afferent – underlying human-to-human touch.

By: Steven C. Hauser, Saad S. Nagi, Sarah McIntyre, Ali Israr, Håkan Olausson, Gregory J. Gerling
Areas: AR/VR

July 7, 2019

A Compact Skin-Shear Device using a Lead-Screw Mechanism

IEEE World Haptics Conference

We present a skin-shear actuator based on a lead-screw mechanism. The lead-screw mechanism is simple and reliable, requires fewer components, and fits into compact form factors. We present the mechanical design of a single assembly unit and implement multiple units in a single handheld device. We evaluate the actuator in one instrumentation-based test and one preliminary user study.

By: Pratheev Sreetharan, Ali Israr, Priyanshu Agarwal
Areas: AR/VR

June 27, 2019

Sensor Modeling and Benchmarking — A Platform for Sensor and Computer Vision Algorithm Co-Optimization

International Image Sensor Workshop

We predict that applications in AR/VR devices [1] and intelligent devices will lead to the emergence of a new class of image sensors — machine perception CIS (MPCIS). This new class of sensors will produce images and videos optimized primarily for machine vision applications, not human consumption.

By: Andrew Berkovich, Chiao Liu

June 16, 2019

Self-Supervised Adaptation of High-Fidelity Face Models for Monocular Performance Tracking

Conference on Computer Vision and Pattern Recognition (CVPR)

Improvements in data-capture and face-modeling techniques have enabled the creation of high-fidelity, realistic face models. However, driving these realistic face models requires special input data, e.g. 3D meshes and unwrapped textures. Moreover, these face models expect clean input data captured under controlled lab conditions, which differs substantially from data collected in the wild. These constraints make it challenging to use high-fidelity models for tracking with commodity cameras. In this paper, we propose a self-supervised domain adaptation approach that enables the animation of high-fidelity face models from a commodity camera.

By: Jae Shin Yoon, Takaaki Shiratori, Shoou-I Yu, Hyun Soo Park

June 14, 2019

2.5D Visual Sound

Conference on Computer Vision and Pattern Recognition (CVPR)

Binaural audio provides a listener with 3D sound sensation, allowing a rich perceptual experience of the scene. However, binaural recordings are scarcely available and require nontrivial expertise and equipment to obtain. We propose to convert common monaural audio into binaural audio by leveraging video.

By: Ruohan Gao, Kristen Grauman