A basic building block of audio for Augmented Reality (AR) is the layering of virtual sound sources on top of the real sources present in an environment. For these virtual sources to be perceived as part of the natural scene, the room acoustics of the listening space must be carefully replicated. However, it is unclear to what extent the real and virtual room impulse responses (RIRs) must be matched to generate plausible scenes in which virtual sound sources blend seamlessly with real ones. This contribution presents an auralization framework for the binaural rendering, manipulation, and reproduction of room acoustics in augmented reality scenarios, with the aim of better understanding the perceptual relevance of individual room acoustic parameters. Auralizations are generated from measured multichannel room impulse responses (MRIRs) parametrized with the Spatial Decomposition Method (SDM). An alternative method to correct the known time-dependent coloration of SDM-based auralizations is presented. Instrumental validation shows that the re-synthesized binaural room impulse responses (BRIRs) are in close agreement with measured BRIRs. In situ perceptual validation shows that expert listeners, given visual cues, an explicit sound reference, and unlimited listening time, are able to discriminate between a real loudspeaker and its re-synthesized version. Once visual cues are removed, however, the renderings are as plausible as the real source. Finally, approaches for manipulating the spatial and time-energy properties of the auralizations are presented.