Multiband Processing... on the right track?

Hi all, I’m fairly new to JUCE, and as a project I’m making a multi-band “leveler” which just lets you adjust the gain in different frequency bands, with a movable crossover. Ideally I want to do 4 bands, but to get started I did just two bands, and it seems to behave as expected apart from what sound like some phase issues. Here are my main questions:

  1. Will I necessarily need to make copies of the buffer passed to processBlock()? Right now, inside processBlock() I create two copies, band1Buffer and band2Buffer, which are processed separately and then summed before output.

  2. The current processing loops over the channels to create the copy buffers, ends that loop to run the dsp-module processor chains, then loops over the channels again to sum back into the output buffer. This loop-end-loop structure inside processBlock() feels a little inelegant, but again I’m new, and maybe this is the best way given that addFrom() and copyFrom() are designed to work on one channel at a time.

  3a. I’m aware from some research that the choice of filter is important for dealing with phase issues, especially as we move to more bands. Currently I have a ProcessorChain consisting of a ProcessorDuplicator of a StateVariableFilter, and a Gain. Is this just plain wrong?

  3b. My next task is to study the details of DSP filtering to better understand how the phase gets affected. I have the book “Designing Audio Effects Plugins in C++” by Will Pirkle, and some other online resources, but any other pointers in the right direction are much appreciated, thanks!

Below I’m copying some of the relevant segments of my code, as well as a GitHub link to the full project.
Link to Project:

Processor Chains:

    //=========== DSP Processing Chains =============
    dsp::ProcessorChain<dsp::ProcessorDuplicator<dsp::StateVariableFilter::Filter<float>, dsp::StateVariableFilter::Parameters<float>>, dsp::Gain<float>> band1Chain;
    dsp::ProcessorChain<dsp::ProcessorDuplicator<dsp::StateVariableFilter::Filter<float>, dsp::StateVariableFilter::Parameters<float>>, dsp::Gain<float>> band2Chain;

prepareToPlay and updateProcessorChains methods:

    void TwoBandLeveler_3AudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
    {
        mSampleRate = sampleRate;

        dsp::ProcessSpec spec;
        spec.sampleRate = mSampleRate;
        spec.maximumBlockSize = static_cast<uint32> (samplesPerBlock);
        spec.numChannels = static_cast<uint32> (getMainBusNumOutputChannels());

        // the chains must be prepared with the spec before they can process audio
        band1Chain.prepare (spec);
        band2Chain.prepare (spec);
    }


    void TwoBandLeveler_3AudioProcessor::updateProcessorChains()
    {
        float crossover = *pCrossover;
        float band1gain = *pBand1Gain;
        float band2gain = *pBand2Gain;

        band1Chain.get<0>().state->type = dsp::StateVariableFilter::Parameters<float>::Type::lowPass;
        band1Chain.get<0>().state->setCutOffFrequency (mSampleRate, crossover, (1.0 / MathConstants<double>::sqrt2));
        band1Chain.get<1>().setGainDecibels (band1gain);   // apply the band gain (assuming the parameter is in dB)

        band2Chain.get<0>().state->type = dsp::StateVariableFilter::Parameters<float>::Type::highPass;
        band2Chain.get<0>().state->setCutOffFrequency (mSampleRate, crossover, (1.0 / MathConstants<double>::sqrt2));
        band2Chain.get<1>().setGainDecibels (band2gain);
    }

processBlock method:

    void TwoBandLeveler_3AudioProcessor::processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages)
    {
        ScopedNoDenormals noDenormals;
        auto totalNumInputChannels  = getTotalNumInputChannels();
        auto totalNumOutputChannels = getTotalNumOutputChannels();
        auto bufferLength = buffer.getNumSamples();

        for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
            buffer.clear (i, 0, bufferLength);

        // update processor chains with any new settings from parameters
        updateProcessorChains();

        // set up copy buffers to process each band separately
        AudioBuffer<float> band1Buffer;
        AudioBuffer<float> band2Buffer;
        band1Buffer.setSize (totalNumInputChannels, bufferLength);
        band2Buffer.setSize (totalNumInputChannels, bufferLength);

        // copy input into each band's buffer
        for (int channel = 0; channel < totalNumInputChannels; ++channel)
        {
            auto* bufferData = buffer.getReadPointer (channel);
            band1Buffer.copyFrom (channel, 0, bufferData, bufferLength);
            band2Buffer.copyFrom (channel, 0, bufferData, bufferLength);
        }

        // apply the filter and gain processing to each band using dsp module features
        dsp::AudioBlock<float> band1Block (band1Buffer);
        dsp::AudioBlock<float> band2Block (band2Buffer);
        band1Chain.process (dsp::ProcessContextReplacing<float> (band1Block));
        band2Chain.process (dsp::ProcessContextReplacing<float> (band2Block));

        // clear the input and sum the bands back into the output buffer
        for (int channel = 0; channel < totalNumInputChannels; ++channel)
        {
            buffer.clear (channel, 0, bufferLength);
            buffer.addFrom (channel, 0, band1Buffer.getReadPointer (channel), bufferLength);
            buffer.addFrom (channel, 0, band2Buffer.getReadPointer (channel), bufferLength);
        }
    }

Take a look at Linkwitz-Riley crossovers, even here on the forum. This is the way to go to split a signal into different bands. Make sure you align the phase with an allpass before you sum the bands up again. You’ll find quite a lot about that in other threads here. You might also want to check out the JUCE 6 branch; it brought a handy class for creating an LR crossover.


Oh, and regarding making copies: the best strategy here is to add the copy buffers as members of your processor and resize them in the prepareToPlay method. That way you don’t have to allocate memory on a realtime thread, which is a big no-no! :wink:


Awesome, thank you! Actually, is there any reference for best practices regarding when to use global member variables and when to instantiate inside the scope of the function? I wrote this down on my big list of “C++ questions” but haven’t answered it yet. Thanks!

Just nitpicking terminology: a global is not a member, but it’s clear what you meant.

Generally: always try to limit the scope of variables as much as possible.
Putting variables on the stack is always unproblematic; allocations, on the other hand, must not happen on the audio thread (as @danielrudrich pointed out).
Allocations are not always easy to spot. These are some examples:

  • Creating an object with new or std::make_unique
  • Resizing a container like AudioBuffer or std::vector and others
  • C-style allocation like malloc or calloc

If you need an AudioBuffer, create it as a member and call setSize() in prepareToPlay, which does NOT happen on the audio thread and is therefore safe. It is also the host’s responsibility to make sure processBlock() is not called before prepareToPlay has finished.

There is a lot more to be said, but those are the most important bits IMHO.
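To make the member-buffer pattern above concrete, here is a minimal plain-C++ sketch (no JUCE, and all the names are mine): the scratch buffers are members, every allocation happens in a prepare call off the audio thread, and the process call only touches memory that already exists.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical two-band processor skeleton illustrating the pattern:
// allocate in prepare(), never in process().
struct BandSplitter
{
    std::vector<float> band1Scratch, band2Scratch;

    // Analogous to prepareToPlay(): called off the audio thread,
    // so allocating here is fine.
    void prepare (std::size_t maxBlockSize)
    {
        band1Scratch.resize (maxBlockSize);
        band2Scratch.resize (maxBlockSize);
    }

    // Analogous to processBlock(): numSamples <= maxBlockSize is part of
    // the host contract, so nothing below can trigger a reallocation.
    void process (const float* input, float* output, std::size_t numSamples)
    {
        for (std::size_t i = 0; i < numSamples; ++i)
        {
            band1Scratch[i] = input[i];   // stand-ins for the per-band filtering
            band2Scratch[i] = input[i];
            output[i] = 0.5f * (band1Scratch[i] + band2Scratch[i]);
        }
    }
};
```

With the real filters in place, the only change to the processBlock above would be copying into members like these instead of into freshly constructed AudioBuffers.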


Awesome. Nitpicky is fine haha, like I said I’m pretty new but I’d rather get this all sorted out sooner rather than later. Thank you!

Hey Daniel,

Looping back here after a few weeks of remedial DSP theory for myself… I understand now that L-R filters are ideal for this application; they seem to be the gold standard for crossovers everywhere. I checked out the forums here and found the code you posted (much gratitude), in which you calculate the coefficients for the 2nd-order Butterworth and cascade to get the L-R filter. I read through the calculation of the coefficients and found it to be slightly different from the coefficients listed for the same filter in the book I’ve been studying from (Will Pirkle’s “Designing Audio Effects Plugins in C++”).

Is there a definitive reference for how to calculate the coefficients? Are yours right and his wrong? Or are there simply different “versions” of the calculation? I can see this being anything from different algebra (maybe they’re actually equivalent) to different methods of converting from the analog transfer function in the first place. It seems like there should be a definitive set of coefficients for a 2nd-order Butterworth, short of the actual numerical calculation for a given cutoff frequency.

Just checked the coefficients in Daniel’s code and they’re right.

I don’t have the book to compare, but you can indeed calculate them in different ways. If you look at the simplified expressions in the image, they have both sin and cos. Using tan makes the coefficients more complicated but saves a call to a trig function.
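To illustrate the point, here is a self-contained sketch (helper names are mine, not from either source) of both routes to the same normalised 2nd-order Butterworth low-pass biquad: the sin/cos form as in the RBJ “Audio EQ Cookbook”, and the tan form that falls out of applying the bilinear transform to the analog prototype. They are algebraically equivalent.

```cpp
#include <array>
#include <cmath>

// Coefficients returned as { b0, b1, b2, a1, a2 }, normalised so a0 == 1.
static const double kPi = 3.14159265358979323846;

// sin/cos formulation (RBJ cookbook style)
std::array<double, 5> lowpassSinCos (double fc, double fs, double Q)
{
    const double w0    = 2.0 * kPi * fc / fs;
    const double alpha = std::sin (w0) / (2.0 * Q);
    const double a0    = 1.0 + alpha;
    const double b1    = 1.0 - std::cos (w0);
    return { b1 / 2.0 / a0, b1 / a0, b1 / 2.0 / a0,
             -2.0 * std::cos (w0) / a0, (1.0 - alpha) / a0 };
}

// tan formulation (bilinear transform with K = tan(w0 / 2))
std::array<double, 5> lowpassTan (double fc, double fs, double Q)
{
    const double K    = std::tan (kPi * fc / fs);
    const double norm = 1.0 / (1.0 + K / Q + K * K);
    return { K * K * norm, 2.0 * K * K * norm, K * K * norm,
             2.0 * (K * K - 1.0) * norm, (1.0 - K / Q + K * K) * norm };
}
```

A little algebra (using K² = (1 − cos ω)/(1 + cos ω)) shows the two sets are identical term by term, so a difference between references can come down to exactly this kind of reparametrisation rather than a different filter.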

Thanks for double checking!
Meanwhile I figured out that you can simply use JUCE’s IIR filters: two 2nd-order LPs and two 2nd-order HPs (Butterworth -> Q = 1/sqrt(2)), and compensate the crossover with one of JUCE’s 2nd-order allpasses. You should get the same coefficients that way.
The upcoming JUCE 6 even has an LR crossover class, which I guess uses exactly that scheme (well, there’s no other :wink: )


Figured it out. Pirkle’s book reverses the convention for which coefficients are labeled with “b” and which are labeled with “a”. Major facepalm moment for me, although admittedly this is a little confusing.

Thanks for your help on this!
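For anyone hitting the same trap: the difference equation itself is fixed, only the letters attached to each side differ between references. A quick self-contained sketch (my own helper, not from either book) of the direct-form biquad under the common convention:

```cpp
#include <vector>

// Direct Form I biquad impulse response under the usual convention
// (b on the feed-forward/input side, a on the feedback/output side):
//
//   y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2] - a1 y[n-1] - a2 y[n-2]
//
// Some texts attach the letters the other way round. If you copy such
// coefficients into these slots without swapping, the feedback terms end
// up on the feed-forward side and you get a different filter entirely.
std::vector<double> impulseResponse (double b0, double b1, double b2,
                                     double a1, double a2, int n)
{
    std::vector<double> y (n, 0.0);
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0;
    for (int i = 0; i < n; ++i)
    {
        const double x = (i == 0) ? 1.0 : 0.0;   // unit impulse in
        y[i] = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y[i];
    }
    return y;
}
```

Comparing a few impulse-response samples against a hand calculation is a cheap way to confirm which convention a given set of published coefficients assumes.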

Alright, well, I got a clean crossover using JUCE’s makeLowPass and makeHighPass with the Q set to 1/sqrt(2). (I tried inputting the coefficients directly too, but that seems to give me some very weird artifacts. More on this later…)

For now, since this pet project creates two bands with only a single crossover, I’m wondering if I need to use an allpass to correct for the phase response of the filters. And if so, where do I put it in this case? Just after one of the bands, like this?

     |------>HP1 ------------->|
---->|                        + ------->
     |------>LP1 --->AP1 ----->|

I’ve seen a few block diagrams, but they have either 3 or 4 bands. I can’t find anywhere that explains how to choose the correct location and cutoff frequency for the allpass filter in the chain. Any help or references to good explanations would be much appreciated.

Update… I tried the configuration above and the phase issues were worse than when I didn’t have an allpass at all. There seemed to be little if any phase issue in the single-crossover configuration with no allpass, just:

     |------>HP1 -------->|
---->|                    +------>
     |------>LP1 -------->|

Think about it this way: each filter produces a phase/group delay, and the high- and low-pass filters of an LR pair have the same delay. You need all the output branches of the split tree to have the same delay.

So you make your tree, with all the splits you need, using LR pairs: LP1/HP1, LP2/HP2… until you get all your outputs. Then you go backwards from the outputs to the root. To get the same delay for all outputs, all paths should go through the same crossovers. If one path goes 3-2-1 and another goes 2-1, you need an AP3 on the second one.

In your example you have one split and two outputs. Both paths go through just crossover 1 (LP1 and HP1), which have the same delay, so you don’t need an allpass. In other words, you compensate for the crossovers that an output didn’t go through.
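This rule can be checked numerically on the analog prototypes. A sketch under my own naming (the crossover frequencies are arbitrary examples, not from the thread): a 3-band tree with LR4 pairs at w1 and w2, where the low branch skips crossover 2 and therefore gets the matching 2nd-order allpass. The summed magnitude comes out flat.

```cpp
#include <cmath>
#include <complex>

using cd = std::complex<double>;

// 2nd-order Butterworth analog prototypes (cutoff baked into s = jw/wc)
cd butterLP2 (cd s) { return 1.0 / (s * s + std::sqrt (2.0) * s + 1.0); }
cd butterHP2 (cd s) { return (s * s) / (s * s + std::sqrt (2.0) * s + 1.0); }
cd butterAP2 (cd s) { return (s * s - std::sqrt (2.0) * s + 1.0)
                           / (s * s + std::sqrt (2.0) * s + 1.0); }

// |low + mid + high| at angular frequency w, for a 3-band tree with
// LR4 crossovers at w1 and w2 (LR4 = squared 2nd-order Butterworth):
//   low  = LP4(w1) * AP2(w2)   <- compensates for the skipped crossover 2
//   mid  = HP4(w1) * LP4(w2)
//   high = HP4(w1) * HP4(w2)
double treeMagnitude (double w, double w1, double w2)
{
    const cd s1 (0.0, w / w1), s2 (0.0, w / w2);
    const cd low  = butterLP2 (s1) * butterLP2 (s1) * butterAP2 (s2);
    const cd mid  = butterHP2 (s1) * butterHP2 (s1) * butterLP2 (s2) * butterLP2 (s2);
    const cd high = butterHP2 (s1) * butterHP2 (s1) * butterHP2 (s2) * butterHP2 (s2);
    return std::abs (low + mid + high);
}
```

Drop the butterAP2 factor from the low branch and the sum is no longer allpass, which is exactly the audible phase problem the compensation avoids.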


Amazing, thank you, this makes complete sense. Presumably each filter has a phase shift that depends on the cutoff frequency, so we just need to send each branch through phase shifts of the same total amount for everything to sum back in phase at the end. Brilliant. It seems to be true, then, that for a given cutoff frequency the 4th-order L-R high pass and low pass have the same phase response, and that it’s also the same as the 2nd-order Butterworth allpass? Any insight into why that’s true?

Well, that’s by design. LR pairs were developed as speaker crossovers, and the design targets were a flat amplitude response and a consistent phase response. There are other crossover designs that meet one condition but not the other. As to why it works, I’m not expert enough to give you an intuitive explanation, but the maths add up: if you chain two identical LPs or two identical HPs, you get the phase response of the AP of the same order. That is, these filters have the same phase response, as do these ones.
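The “maths add up” claim is easy to verify numerically. A self-contained sketch (names are mine): evaluate LP⁴ + HP⁴ for one LR4 pair on the imaginary axis and compare it, as a complex value, against the 2nd-order Butterworth allpass. They agree in both magnitude and phase.

```cpp
#include <cmath>
#include <complex>

using cd = std::complex<double>;

// One LR4 pair summed: LP4(jw) + HP4(jw), with LP4 = (Butterworth-2 LP)^2
// and HP4 = (Butterworth-2 HP)^2, cutoff normalised to 1 rad/s.
cd lr4Sum (double w)
{
    const cd s (0.0, w);
    const cd den = s * s + std::sqrt (2.0) * s + 1.0;
    const cd lp2 = 1.0 / den;
    const cd hp2 = (s * s) / den;
    return lp2 * lp2 + hp2 * hp2;
}

// 2nd-order Butterworth allpass at the same cutoff. Algebraically
// lr4Sum == allpass2, because the summed numerator 1 + s^4 factors as
// (s^2 + sqrt(2) s + 1)(s^2 - sqrt(2) s + 1).
cd allpass2 (double w)
{
    const cd s (0.0, w);
    return (s * s - std::sqrt (2.0) * s + 1.0)
         / (s * s + std::sqrt (2.0) * s + 1.0);
}
```

So the LP and HP branches of an LR4 pair sum to exactly that allpass, which is why a matching 2nd-order allpass is the right compensator on branches that skip a crossover.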