Here we explore whether a sparse representation of human touch is sufficient to convey social touch signals. To test this, we collected a dataset of social touch interactions using a soft wearable pressure sensor array, developed an algorithm to map the recorded data to an array of actuators, and then applied this algorithm to create signals that drive an array of normal-indentation actuators placed on the arm.
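The core of such a sensor-to-actuator mapping can be illustrated with a minimal sketch. This is not the authors' actual algorithm; it assumes a dense pressure frame that is average-pooled down to a sparse actuator grid (the 8x16 sensor and 2x4 actuator sizes are hypothetical):

```python
import numpy as np

def map_pressure_to_actuators(pressure, actuator_grid=(2, 4)):
    """Downsample a dense pressure frame to a sparse actuator array by
    average-pooling the sensor taxels that fall under each actuator."""
    rows, cols = pressure.shape
    ar, ac = actuator_grid
    # Split the sensor grid into one block per actuator, then average each block.
    blocks = pressure.reshape(ar, rows // ar, ac, cols // ac)
    commands = blocks.mean(axis=(1, 3))
    # Normalize to a 0-1 indentation command range.
    peak = commands.max()
    return commands / peak if peak > 0 else commands

# Example: an 8x16 pressure frame driving a 2x4 actuator array.
frame = np.random.rand(8, 16)
drive = map_pressure_to_actuators(frame)
print(drive.shape)  # (2, 4)
```

Real mappings would also need temporal filtering and per-actuator calibration; the pooling step above only shows how spatial sparsification of a touch recording might look.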
This work seeks to rectify that gap by quantifying the relationship between curing kinetics, measured using differential scanning calorimetry, and adhesion, measured by peel testing, for a printable Ecoflex 00-30-based silicone formulation.
To inform dynamic approaches to setting such a tradeoff, we conducted two user studies on how relative preference for false positive versus false negative errors is influenced by the temporal cost of error recovery and by high-level task factors (time pressure, multitasking).
In this paper, we investigate whether the results of such studies translate to a real application and game-like experience. We designed a virtual escape room in which participants interact with various objects to gather clues and complete puzzles.
This paper presents three amelioration strategies for handling these errors and demonstrates experimentally that all three effectively reduce the errors' impact. This setting is also used to explore general issues in study design for motion perception.
We ran a user study with the salient haptic cues to determine how well people could identify them on the dorsal side of the wrist without training, whether they could interpret them better with training, and whether that knowledge could be transferred to a secondary, untrained location (the volar side of the wrist).
Visual search is a routine human behavior and a canonical example of selectively sampling sensory information in service of a goal. To extend insights from existing models of optimal visual search to naturalistic environments, we conducted a study of visual search in virtual reality.
This work explores the design of marking menus for gaze-based AR/VR menu selection by expert and novice users. It first identifies and explains the challenges inherent in oculomotor control and current eye-tracking hardware, including overshooting, incorrect selections, and false activations.
Results demonstrated that targets appearing closer to the skin, located around the wrist, or placed on the medial side of the forearm could be selected more quickly than targets farther from the skin, located around the elbow, or placed on the lateral side of the forearm. Based on these results, we developed the Armstrong guidelines, demonstrated through a Unity plugin that enables designers to create performance-optimized, arm-anchored 3D UI layouts.