ReverbAudioSource - when and where

Taking note of this suggestion from @jules about a ReverbAudioSource per channel:

Where is the best place to create each instance of the ReverbAudioSource for each channel?

Also, in the reverb parameters, size is a scale from 0 to 1, with 1 being a big room. Size in meters is important to my application - does anyone have an idea of what dimension (in seconds or meters) ‘big’ corresponds to?

The JUCE Reverb and ReverbAudioSource already work internally as stereo, so you don’t need multiple instances for stereo. Jules’s answer is actually a bit confusing in that context. Most of the other JUCE audio DSP classes only work as mono, though, and for those you will need an object instance per channel (or, with the newer DSP classes, a ProcessorDuplicator).
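For the mono-only classes, here’s a minimal sketch of the ProcessorDuplicator approach, assuming the juce_dsp module is available. The filter choice and the prepare/process helpers are just illustrative, not part of the reverb discussion:

```cpp
#include <juce_dsp/juce_dsp.h>

// ProcessorDuplicator wraps a mono-only processor (here a dsp::IIR::Filter)
// and runs one internal instance per channel automatically.
using MonoFilter = juce::dsp::IIR::Filter<float>;
using Coeffs     = juce::dsp::IIR::Coefficients<float>;

juce::dsp::ProcessorDuplicator<MonoFilter, Coeffs> duplicatedFilter;

void prepare (double sampleRate, int maxBlockSize, int numChannels)
{
    // One shared state (the coefficients), one filter instance per channel.
    duplicatedFilter.state = Coeffs::makeLowPass (sampleRate, 1000.0f);
    duplicatedFilter.prepare ({ sampleRate,
                                (juce::uint32) maxBlockSize,
                                (juce::uint32) numChannels });
}

void process (juce::AudioBuffer<float>& buffer)
{
    juce::dsp::AudioBlock<float> block (buffer);
    duplicatedFilter.process (juce::dsp::ProcessContextReplacing<float> (block));
}
```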

I’m going to confuse matters further :smile: because I actually do want one per channel - I need the reverb size to be variable on a per-channel basis.

I’d probably prefer them to be mono reverbs, though I guess I can just sum them down.

The Reverb class has a processMono method for processing a mono signal. Also, the processStereo method really just downmixes the incoming stereo to mono before the reverb processing.

I’m not seeing this?

https://docs.juce.com/master/classReverbAudioSource.html#afda75efd33835198182fb38fb49cf967

OK, this is perhaps because I was talking about the ReverbAudioSource while you were talking about the Reverb class - I’ll take a look at this.

Reverb, not ReverbAudioSource. I guess the ReverbAudioSource does the mono processing if its source is mono. (ReverbAudioSource internally uses the Reverb class to do the reverb processing.)
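For what it’s worth, here’s a minimal sketch of the per-channel mono approach using the Reverb class directly. The PerChannelReverbs wrapper and its names are my own invention; juce::Reverb, its Parameters struct and processMono are the real API:

```cpp
#include <juce_audio_basics/juce_audio_basics.h>

// Hypothetical wrapper: one mono juce::Reverb per channel, each with its
// own room size.
struct PerChannelReverbs
{
    juce::OwnedArray<juce::Reverb> reverbs;

    void prepare (double sampleRate, int numChannels)
    {
        reverbs.clear();
        for (int i = 0; i < numChannels; ++i)
            reverbs.add (new juce::Reverb())->setSampleRate (sampleRate);
    }

    void setRoomSize (int channel, float size0to1)
    {
        auto params = reverbs[channel]->getParameters(); // copies the Parameters struct
        params.roomSize = size0to1;                      // the 0..1 scale discussed above
        reverbs[channel]->setParameters (params);
    }

    void process (juce::AudioBuffer<float>& buffer)
    {
        for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
            reverbs[ch]->processMono (buffer.getWritePointer (ch),
                                      buffer.getNumSamples());
    }
};
```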


Have them in some kind of container that is a member variable of your AudioProcessor, or whatever you are using to process your audio. As usual, you are advised to create them outside the audio thread code - so probably in your audio class’s constructor or in prepareToPlay. (If using the latter, take care that you don’t needlessly recreate them by accident.)
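Something like this, as a sketch - the MyProcessor name and the reverbs member are placeholders, but the guard against recreating the objects is the point:

```cpp
// Inside a hypothetical AudioProcessor subclass, with a member:
//     juce::OwnedArray<juce::Reverb> reverbs;
void MyProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
    juce::ignoreUnused (samplesPerBlock);
    const int numChannels = getTotalNumOutputChannels();

    // Only rebuild the container if the channel count actually changed,
    // so repeated prepareToPlay calls don't needlessly recreate the objects.
    if (reverbs.size() != numChannels)
    {
        reverbs.clear();
        for (int i = 0; i < numChannels; ++i)
            reverbs.add (new juce::Reverb());
    }

    for (auto* r : reverbs)
        r->setSampleRate (sampleRate);
}
```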

If the signal flow is always going to be something like audio file -> reverb, you might also consider writing your own AudioSource subclass that combines the file playing and the reverb processing (and any other processing needed per playing audio file). That way you wouldn’t need separate containers of AudioFormatReaderSources, ReverbAudioSources etc. cluttering your main audio processing class.
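A rough sketch of that idea - the ReverberatedFileSource class is hypothetical, but AudioFormatReaderSource and ReverbAudioSource are the real JUCE classes being combined:

```cpp
#include <juce_audio_formats/juce_audio_formats.h>

// One object per playing file: reader source -> reverb, pulled as a single
// AudioSource by the mixer / main audio class.
class ReverberatedFileSource : public juce::AudioSource
{
public:
    explicit ReverberatedFileSource (juce::AudioFormatReader* reader)
        : fileSource (reader, true),        // takes ownership of the reader
          reverbSource (&fileSource, false) // does not own the member source
    {}

    void setRoomSize (float size0to1)
    {
        auto params = reverbSource.getParameters();
        params.roomSize = size0to1;
        reverbSource.setParameters (params);
    }

    void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override
    {
        // ReverbAudioSource forwards this to its input source as well.
        reverbSource.prepareToPlay (samplesPerBlockExpected, sampleRate);
    }

    void releaseResources() override { reverbSource.releaseResources(); }

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& info) override
    {
        reverbSource.getNextAudioBlock (info); // file audio, reverberated
    }

private:
    juce::AudioFormatReaderSource fileSource;
    juce::ReverbAudioSource reverbSource;
};
```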
