Binaural synthesis of spatial audio for virtual and augmented environments is typically performed by convolving an anechoic signal with impulse responses measured at the listener’s ears for a source at a fixed distance. This method is accurate for sounds presented at the distance at which the impulse responses were measured, and for more distant sources a simple intensity reduction provides a reasonable approximation. Sources nearer than about 1 m, however, carry additional distance cues. The head-shadow effect becomes markedly stronger, and the inverse-square falloff of intensity across the width of the head exaggerates interaural level differences. The angle from the source to each ear also diverges, changing the direction-dependent spectral filtering of the pinnae. Changes in interaural level differences can be approximated across listeners, but distance-dependent changes in pinna cues are highly individual and thus less easily approximated. Reproducing some or all of these cues for each individual may be necessary to create a convincing percept of very near objects in virtual and augmented reality environments. The present work aims to determine the importance of each cue for static and dynamic sources. Listening tests were conducted in a virtual environment using generic and individual head-related impulse responses, with and without near-field compensation.
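The conventional rendering pipeline described above can be sketched as follows; this is a minimal illustration, assuming NumPy and hypothetical signal and parameter names (`anechoic`, `hrir_left`, `hrir_right`, `source_dist`, `ref_dist`) not taken from the paper, with distance approximated by a simple inverse-distance pressure gain rather than any near-field compensation:

```python
import numpy as np

def render_binaural(anechoic, hrir_left, hrir_right,
                    source_dist=2.0, ref_dist=1.0):
    """Sketch of conventional binaural synthesis: convolve an anechoic
    signal with per-ear head-related impulse responses measured at
    ref_dist, then approximate a more distant source with an
    inverse-distance (1/r) pressure gain. Near-field cues (stronger
    head shadow, per-ear angle divergence) are NOT modeled here."""
    # Pressure amplitude falls as 1/r relative to the measurement distance.
    gain = ref_dist / source_dist
    left = gain * np.convolve(anechoic, hrir_left)
    right = gain * np.convolve(anechoic, hrir_right)
    return left, right
```

Because the same fixed-distance impulse responses are reused for every source position along a given direction, this scheme cannot reproduce the distance-dependent interaural level and pinna cues that the present work investigates.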