Modifying & extending juce::Reverb

Hey all,
I’m modifying the built-in juce::Reverb class and I have some questions about its current design and performance. Here are the docs↗ and the source↗ for reference. Please understand that I’m more of a “programmer” than an “audio programmer” or “C++ programmer”, so I might be asking some rookie questions here.

  1. On line 140 of the source, you see this code in processStereo:
            const float input = (left[i] + right[i]) * gain;

Why are the left and right channels added together before the reverb is applied? Doesn’t this preclude the possibility of a meaningful width parameter, which is supposed to introduce stereo bleed in the reverb at low width, while maintaining distinct left/right reverberations at high width? See the source at lines 76-77, when the left/right gains are calculated in setParameters:

        wetGain1.setTargetValue (0.5f * wet * (1.0f + newParams.width));
        wetGain2.setTargetValue (0.5f * wet * (1.0f - newParams.width));

and 159-163, in processStereo when the reverb is mixed back into the source buffer:

            const float wet1 = wetGain1.getNextValue();
            const float wet2 = wetGain2.getNextValue();

            left[i]  = outL * wet1 + outR * wet2 + left[i]  * dry;
            right[i] = outR * wet1 + outL * wet2 + right[i] * dry;

Isn’t all this effort to create separate gains for the left- and right-channel reverb wasted if we mix the inputs together before calculating the reverb?
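To make sure I understand the width math, I wrote a toy version in plain Python (the function names are mine, not JUCE’s). If I read the source right, outL and outR still differ even with a summed input, because the right channel’s comb tunings are offset by a stereo-spread constant; width then controls how much the two tails cross-mix:

```python
def wet_gains(wet, width):
    """Freeverb-style wet gain pair: wet1 scales the same-side
    reverb output, wet2 the opposite side."""
    wet1 = 0.5 * wet * (1.0 + width)
    wet2 = 0.5 * wet * (1.0 - width)
    return wet1, wet2

def mix(outL, outR, width, wet=1.0):
    """Cross-mix the two reverb outputs as in processStereo
    (dry path omitted for clarity)."""
    wet1, wet2 = wet_gains(wet, width)
    left  = outL * wet1 + outR * wet2
    right = outR * wet1 + outL * wet2
    return left, right

# width = 1: each channel keeps only its own reverb tail
print(mix(0.8, 0.2, width=1.0))   # (0.8, 0.2) -- fully separate
# width = 0: both channels get the average -> mono reverb
print(mix(0.8, 0.2, width=0.0))   # (0.5, 0.5)
```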

  2. The built-in module uses eight comb filters running in parallel (their outputs are summed), and four all-pass filters acting in series on that sum. I’m experimenting right now with using eight all-pass filters applied to the individual outputs of the comb filters instead. I’ve read online that both approaches are valid, but can anyone with more experience designing reverbs clue me in on the pros and cons of each approach and how to tune them for the best sound?
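For reference, here’s my understanding of the two building blocks as a toy sketch in plain Python (the delay lengths and gains are illustrative only, not the library’s values):

```python
class Comb:
    """Feedback comb filter: y[n] = x[n] + g * y[n - M]."""
    def __init__(self, length, feedback):
        self.buf = [0.0] * length
        self.idx = 0
        self.g = feedback

    def process(self, x):
        y = x + self.g * self.buf[self.idx]
        self.buf[self.idx] = y
        self.idx = (self.idx + 1) % len(self.buf)
        return y

class Allpass:
    """Schroeder all-pass: unit gain at all frequencies, but
    smears phase, which increases echo density."""
    def __init__(self, length, g=0.5):
        self.buf = [0.0] * length
        self.idx = 0
        self.g = g

    def process(self, x):
        delayed = self.buf[self.idx]      # v[n - M]
        v = x + self.g * delayed          # v[n]
        self.buf[self.idx] = v
        self.idx = (self.idx + 1) % len(self.buf)
        return -self.g * v + delayed      # y[n]

# Freeverb-style topology: parallel combs summed, then series all-passes
combs = [Comb(m, 0.84) for m in (1116, 1188, 1277, 1356)]
allpasses = [Allpass(m) for m in (556, 441)]

def reverb_sample(x):
    acc = sum(c.process(x) for c in combs)   # combs in parallel
    for ap in allpasses:                     # all-passes in series
        acc = ap.process(acc)
    return acc
```

My experiment would instead give each comb its own all-pass and sum afterwards; the sketch makes it easy to try both orderings.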

  3. What is the pattern in how the lengths of the filters were decided? Lines 92-93 of the source:
        static const short combTunings[] = { 1116, 1188, 1277, 1356, 1422, 1491, 1557, 1617 }; // (at 44100Hz)
        static const short allPassTunings[] = { 556, 441, 341, 225 };

My client was surprised both that the all-passes process the accumulated comb output (rather than each comb output individually) and by the lengths of their buffers. He expected the individual approach, and for each all-pass filter length, when summed with the length of its associated comb filter, to equal a fixed number. This would help align the outputs in time.

I did a dirty little calculation in Python, associating each all-pass filter with two comb filters, to see how the lengths add up:

>>> combT = [1116, 1188, 1277, 1356, 1422, 1491, 1557, 1617]
>>> apT = [556, 441, 341, 225]
>>> sums = [0] * 8
>>> for i in range(8):
...     # start each sum from the comb filter's length
...     sums[i] += combT[i]
...
>>> for i in range(4):
...     # add each all-pass length to its two associated combs
...     sums[i * 2] += apT[i]
...     sums[i * 2 + 1] += apT[i]
...
>>> sums
[1672, 1744, 1718, 1797, 1763, 1832, 1782, 1842]

What does that translate to in time?

>>> sampleRate = 48000
>>> timeShift = (max(sums) - min(sums)) / sampleRate
>>> round(timeShift * 1000, 2)   # in milliseconds
3.54
At a sampleRate of 48000, if it’s true that the filter buffer length determines the delay time of the samples, there could be up to a 3.5 ms mismatch between when samples come out of the different filters (note: I’m not sure this is relevant in the case where the all-pass filters act on the accumulation, as in the provided source). Can anyone tell me what the pattern was for deciding the comb and all-pass filter buffer lengths?

As far as I know you typically use values that don’t have much to do with each other to avoid resonances, i.e. you avoid lengths that are multiples of one another. I checked whether these values are prime numbers and, to my surprise, almost none of them are (only 1277 is).


Thanks Mrugalla. I might try changing them to prime numbers and see how that affects the sound.
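Before changing anything, I sketched a quick check in Python. The classic Schroeder advice is apparently that the delay lengths should be *mutually* prime (pairwise coprime) rather than prime themselves, so I listed which pairs of the JUCE comb tunings share factors:

```python
from math import gcd
from itertools import combinations

combTunings = [1116, 1188, 1277, 1356, 1422, 1491, 1557, 1617]

# List every pair of comb lengths that shares a common factor
shared = [(a, b, gcd(a, b))
          for a, b in combinations(combTunings, 2)
          if gcd(a, b) > 1]

for a, b, g in shared:
    print(f"{a} and {b} share a factor of {g}")
```

Interestingly, the tunings are not actually mutually prime (1116 and 1188, for example, share a factor of 36), so strict coprimality clearly wasn’t the rule used here either.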

I just found this blog post from @valhalladsp; I’ll have to read through some of those posts to better understand the techniques.

It would seem the built-in module is a Schroeder-type reverb. From the descriptions, these are robust but limited in their ability to replicate a natural sound.

My client is quite an audio freak (in the best way) and isn’t satisfied with the mechanical overtones in the reverb output. We may have to move towards a feedback delay network, a Dattorro reverb, or another model. Lots of learning to do!

It would be helpful if the comments in the Reverb class mentioned it is a Schroeder type, so newbies know what to look for to help them understand it better…

The docs mention it is the old FreeVerb algo, which means it’s not going to be particularly good.


Thank you so much, I should have looked deeper into that. Now that I’m researching freeverb, I’m finding a lot of great resources including block diagrams, transfer functions for the allpass and comb filters, and an explanation of stereo spread. Much appreciated!


To enrich the sound we are going to add clustered echoes to the comb filters, i.e. sampling not just at the main tap, but also at tap + 7, + 17, and + 31, giving four clustered echoes at or around the tap. Additionally, we need to be able to enable or disable them on the fly.
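Here’s a rough sketch of what I mean in plain Python (the offsets are the ones above; the averaging of the taps and the feedback value are just placeholders, not a finished design):

```python
class ClusteredComb:
    """Feedback comb filter that, when enabled, also reads extra
    taps at fixed offsets behind the main delay tap."""
    OFFSETS = (7, 17, 31)  # extra taps relative to the main tap

    def __init__(self, length, feedback=0.84):
        # room for the longest read: main tap + largest offset
        self.buf = [0.0] * (length + max(self.OFFSETS) + 1)
        self.length = length
        self.g = feedback
        self.idx = 0
        self.cluster_enabled = False  # toggled on the fly

    def _read(self, delay):
        return self.buf[(self.idx - delay) % len(self.buf)]

    def process(self, x):
        tap = self._read(self.length)
        if self.cluster_enabled:
            # average the clustered echoes in with the main tap
            taps = [tap] + [self._read(self.length + o)
                            for o in self.OFFSETS]
            tap = sum(taps) / len(taps)
        y = x + self.g * tap
        self.buf[self.idx % len(self.buf)] = y
        self.idx += 1
        return y
```

With `cluster_enabled = False` this degenerates to a plain feedback comb. One thing to watch: flipping the flag abruptly mid-stream can click, so ramping a mix coefficient between the single tap and the cluster average is probably safer than a hard switch.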

I’m getting to work on implementing this. I haven’t used boolean parameters before so I was looking into them, and they seem like a mess:

- Bool parameters With AudioProcessorValueTreeState - gives a best-practice snippet, imho it’s quite messy
- Parameters - Best Practice - discussion
- Behaviour of AudioProcessorValueTreeState Parameter and ButtonAttachment - unanswered
- APVTS::ParameterChanged not being called for toggle buttons - unanswered

Since this is a standalone app, I’m just going to use the xml settings file and skip the value tree.

Hello all, I am trying to allow smoothly resizing the comb filter buffers.

In the original JUCE reverb, the buffer is cleared when a new size is set, and there isn’t any included way to modify the size at runtime.

I found that clearing the buffer introduces clipping/saturation that gradually decays into wobbling and, finally, the normal reverb profile.

I tried calling realloc instead of malloc and found that it only worsened the problem.

I’m thinking now that I’ll just have a second buffer and cross-fade between the two.
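In case it helps anyone following along, here’s a minimal sketch of that cross-fade idea in plain Python (the class and the 256-sample fade length are my own assumptions, not JUCE code): run both delay lines in parallel during the transition and linearly blend their outputs.

```python
class ResizableDelay:
    """Delay line that changes length by cross-fading from the
    old buffer to a freshly allocated one over fade_len samples."""
    def __init__(self, length, fade_len=256):
        self.old = [0.0] * length
        self.new = None            # second buffer, active during a fade
        self.idx = 0
        self.fade_len = fade_len
        self.fade_pos = 0

    def set_length(self, length):
        # start a new, silent buffer; keep the old one running
        self.new = [0.0] * length
        self.fade_pos = 0

    def process(self, x):
        out_old = self.old[self.idx % len(self.old)]
        self.old[self.idx % len(self.old)] = x
        if self.new is not None:
            out_new = self.new[self.idx % len(self.new)]
            self.new[self.idx % len(self.new)] = x
            t = self.fade_pos / self.fade_len   # ramps 0 -> 1
            out = (1.0 - t) * out_old + t * out_new
            self.fade_pos += 1
            if self.fade_pos >= self.fade_len:
                # fade complete: the new buffer takes over
                self.old, self.new = self.new, None
        else:
            out = out_old
        self.idx += 1
        return out
```

The new buffer starts silent, so during the fade the reverb tail of the old length decays out while the new length builds up, which should avoid the hard discontinuity that clearing in place causes.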