The dataset is designed to contain minimal biases and has detailed annotations for the different types of reasoning over the spatio-temporal space of video. Dialogues are synthesized over multiple question turns, each of which is injected with a set of cross-turn semantic relationships. We use DVD to analyze existing approaches, providing interesting insights into their abilities and limitations.
This paper outlines a new method for adapting to desired and undesired signals using their spatial statistics, independent of their temporal characteristics. The method uses a linearly constrained minimum variance (LCMV) beamformer to estimate the relative contribution of each source in a mixture, which is then used to weight statistical estimates of the spatial characteristics of each source for the final separation.
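As a point of reference for the beamformer mentioned above, the classical LCMV solution minimizes output power subject to linear constraints on the steering directions, w = R⁻¹C (CᴴR⁻¹C)⁻¹ f. The sketch below is a generic textbook LCMV computation, not the paper's implementation; the function name and interfaces are illustrative.

```python
import numpy as np

def lcmv_weights(R, C, f):
    """Generic LCMV beamformer weights (illustrative, not the paper's code).

    Minimizes w^H R w subject to C^H w = f, where
      R: (M, M) spatial covariance of the microphone mixture,
      C: (M, K) constraint matrix (one steering vector per source),
      f: (K,)   desired response (e.g. 1 for the target, 0 for interferers).
    """
    Rinv_C = np.linalg.solve(R, C)                       # R^{-1} C
    w = Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, f)  # closed-form LCMV
    return w  # (M,) weights; y = w^H x extracts the target contribution
```

Applying `w` to the mixture passes the constrained (target) direction with unit gain while nulling the other constrained directions, which is what makes the per-source contribution estimates usable as weights for later spatial statistics.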
In this paper, we propose a hand-object spatial representation that can achieve generalization from limited data. Our representation combines the global object shape as voxel occupancies with local geometric details as samples of closest distances. This representation is used by a neural network to regress finger motions from input trajectories of wrists and objects.
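To make the two parts of such a representation concrete, a minimal sketch is given below: a global voxel occupancy grid of the object plus, for each query (finger) point, its closest distance to the object samples. The function name, resolution, and point-cloud inputs are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def hand_object_features(obj_points, query_points, grid_res=8):
    """Illustrative combined representation (assumed interfaces).

    obj_points:   (N, 3) object surface samples.
    query_points: (Q, 3) hand/finger query points.
    Returns a global occupancy grid and per-query closest distances.
    """
    # Global shape: normalize object points into the unit cube, then voxelize.
    lo, hi = obj_points.min(0), obj_points.max(0)
    norm = (obj_points - lo) / np.maximum(hi - lo, 1e-8)
    idx = np.minimum((norm * grid_res).astype(int), grid_res - 1)
    occ = np.zeros((grid_res,) * 3, dtype=np.float32)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    # Local geometry: closest distance from each query point to the object.
    d = np.linalg.norm(query_points[:, None, :] - obj_points[None, :, :], axis=-1)
    return occ, d.min(axis=1)
```

The occupancy grid captures coarse global shape, while the closest-distance samples encode fine local geometry near the fingers; a network regressing finger motion could consume both as input features.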
We present Mixture of Volumetric Primitives (MVP), a representation for rendering dynamic 3D content that combines the completeness of volumetric representations with the efficiency of primitive-based rendering, e.g., point-based or mesh-based methods. Our approach achieves this by leveraging spatially shared computation with a convolutional architecture and by minimizing computation in empty regions of space with volumetric primitives that can move to cover only occupied regions.
The core intuition behind our method is that better drivability and generalization can be achieved by disentangling the driving signals and remaining generative factors, which are not available during animation.
In this paper, we develop a learning framework that generates control policies for physically simulated athletes with many degrees of freedom. Our framework uses a two-step approach, learning basic skills and then bout-level strategies with deep reinforcement learning, inspired by the way people learn competitive sports.
In the 2D study, each video was of resolution 7680×3840 and was viewed and quality-rated by 36 subjects, while in the 3D study, each video was of resolution 5376×5376 and rated by 34 subjects. Both studies were conducted on top of a foveated video player with low motion-to-photon latency (∼50ms).
We ran a user study with the salient haptic cues to determine how well people could identify them on the dorsal side of the wrist without training, whether they could interpret them better with training, and whether that knowledge could be transferred to a secondary, untrained location (the volar side of the wrist).
Our method reconstructs meso- and microscopic surface features on the fly along a contact trajectory, and runs a micro-contact dynamics simulation whose outputs drive vibrotactile haptic actuators and modal sound synthesis. An exploratory, absolute identification user study was conducted as an initial evaluation of our synthesis methods.