Oversampling in parallel processing chains

So I’m trying to implement oversampling in a plugin, and I think I have a basic idea of how it’s supposed to work. The problem is that the only info I can find on it is in contexts where there’s just one continuous processing chain. The plugin I’m writing is a dual-band compressor/distortion with an EQ and limiter at the end, so my signal chain looks like:
input signal => input gain => duplicate => filter/distortion/compressor => merge => eq => limiter/gain => output

So I guess my question is: if I were to oversample this, how would I go about it? Would I somehow oversample at the beginning, duplicate and merge the oversampled block, then downsample at the end? Or would I have to duplicate, upsample, process, downsample, and then merge them, and then oversample the output section separately? Would I have to create separate instances of juce::dsp::Oversampling for each separate occurrence in the latter case?

Honestly, I think I have a dubious understanding of how buffers and audio blocks work in JUCE. Like, if I’m using multiple buffers, how does JUCE know what ends up in the output signal? And when I process an audio block, does that affect the buffer the block is created from? This may be outside the scope of my issue, but I feel like this is why I don’t understand the oversampling concept.

This is my current processBlock. I know it’s probably a little disorganized, but hopefully it helps explain what it is I’m trying to do:

void AudioProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiMessages)
{
juce::ScopedNoDenormals noDenormals;
auto totalNumInputChannels = getTotalNumInputChannels();
auto totalNumOutputChannels = getTotalNumOutputChannels();

for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
    buffer.clear (i, 0, buffer.getNumSamples());

buffer1.makeCopyOf(buffer);
buffer2.makeCopyOf(buffer);

float inputLvl = *apvts.getRawParameterValue("INPUT");

float ch1Cutoff = *apvts.getRawParameterValue("CUTOFF1");
float ch1Dist = *apvts.getRawParameterValue("DIST1");
float ch1Thresh = *apvts.getRawParameterValue("THRESH1");
float ch1Ratio = *apvts.getRawParameterValue("RATIO1");
float ch1Attack = *apvts.getRawParameterValue("ATTACK1");
float ch1Release = *apvts.getRawParameterValue("RELEASE1");
float ch1Lvl = *apvts.getRawParameterValue("LEVEL1");
bool ch1Mute = *apvts.getRawParameterValue("MUTE1");
bool ch1CompOn = *apvts.getRawParameterValue("COMPACT1");
bool ch1DistOn = *apvts.getRawParameterValue("DISTACT1");

float ch2Cutoff = *apvts.getRawParameterValue("CUTOFF2");
float ch2Dist = *apvts.getRawParameterValue("DIST2");
float ch2Thresh = *apvts.getRawParameterValue("THRESH2");
float ch2Ratio = *apvts.getRawParameterValue("RATIO2");
float ch2Attack = *apvts.getRawParameterValue("ATTACK2");
float ch2Release = *apvts.getRawParameterValue("RELEASE2");
float ch2Lvl = *apvts.getRawParameterValue("LEVEL2");
bool ch2Mute = *apvts.getRawParameterValue("MUTE2");
bool ch2CompOn = *apvts.getRawParameterValue("COMPACT2");
bool ch2DistOn = *apvts.getRawParameterValue("DISTACT2");

float band1Lvl = *apvts.getRawParameterValue("BAND1");
float band2Lvl = *apvts.getRawParameterValue("BAND2");
float band3Lvl = *apvts.getRawParameterValue("BAND3");
float band4Lvl = *apvts.getRawParameterValue("BAND4");
float band5Lvl = *apvts.getRawParameterValue("BAND5");

float limitRelease = *apvts.getRawParameterValue("LRELEASE");
float limitThresh = *apvts.getRawParameterValue("LIMIT");
float outputLvl = *apvts.getRawParameterValue("OUTPUT");

inputGain.setGainDecibels(inputLvl);
inputGain.setRampDurationSeconds(0.02f);
juce::dsp::AudioBlock<float> input1GainBlock(buffer1);
juce::dsp::ProcessContextReplacing<float> input1GainContext(input1GainBlock);
inputGain.process(input1GainContext);
juce::dsp::AudioBlock<float> input2GainBlock(buffer2);
juce::dsp::ProcessContextReplacing<float> input2GainContext(input2GainBlock);
inputGain.process(input2GainContext);

juce::dsp::AudioBlock<float> lowBlock(buffer1);
juce::dsp::AudioBlock<float> highBlock(buffer2);
for (int channel = 0; channel < totalNumInputChannels; ++channel)
{
    auto* channel1Data = buffer1.getWritePointer(channel);
    auto* channel2Data = buffer2.getWritePointer(channel);        
    
    lowComp.setAttack(ch1Attack);
    lowComp.setRatio(ch1Ratio);
    lowComp.setRelease(ch1Release);
    lowComp.setThreshold(ch1Thresh);

    highComp.setAttack(ch2Attack);
    highComp.setRatio(ch2Ratio);
    highComp.setRelease(ch2Release);
    highComp.setThreshold(ch2Thresh);


    auto lowChFilter = Coef::makeLowPass(getSampleRate(), ch1Cutoff);
    auto highChFilter = Coef::makeHighPass(getSampleRate(), ch2Cutoff);

    lowPass[channel].coefficients = *lowChFilter;
    highPass[channel].coefficients = *highChFilter;


    auto newLowBlock = lowBlock.getSingleChannelBlock(channel);
    auto newHighBlock = highBlock.getSingleChannelBlock(channel);

    juce::dsp::ProcessContextReplacing<float> lowContext(newLowBlock);
    juce::dsp::ProcessContextReplacing<float> highContext(newHighBlock);

    lowPass[channel].process(lowContext);
    highPass[channel].process(highContext);

    for (int sample = 0; sample < buffer.getNumSamples(); sample++) {
        if (ch1Mute == false) {
            if (ch1DistOn == true) {
            *channel1Data = std::tanh(*channel1Data * ch1Dist);
                *channel1Data /= 10;
            }
            else {
                *channel1Data *= 10;
            }
        }
        else {
            *channel1Data = 0;
        }
        if (ch2Mute == false) {
            if (ch2DistOn == true) {
            *channel2Data = std::tanh(*channel2Data * ch2Dist);
                *channel2Data /= 10;
            }
            else {
                *channel2Data *= 10;
            }
        }
        else {
            *channel2Data = 0;
        }

        channel1Data++;
        channel2Data++;
    }
}
if (ch1CompOn == true) {
    juce::dsp::ProcessContextReplacing<float> lowCompContext(lowBlock);
    lowComp.process(lowCompContext);
}
if (ch2CompOn == true) {
    juce::dsp::ProcessContextReplacing<float> highCompContext(highBlock);
    highComp.process(highCompContext);
}
ch1Gain.setGainDecibels(ch1Lvl);
ch1Gain.setRampDurationSeconds(0.02f);
juce::dsp::AudioBlock<float> ch1GainBlock(buffer1);
juce::dsp::ProcessContextReplacing<float> ch1GainContext(ch1GainBlock);
ch1Gain.process(ch1GainContext);

ch2Gain.setGainDecibels(ch2Lvl);
ch2Gain.setRampDurationSeconds(0.02f);
juce::dsp::AudioBlock<float> ch2GainBlock(buffer2);
juce::dsp::ProcessContextReplacing<float> ch2GainContext(ch2GainBlock);
ch2Gain.process(ch2GainContext);
for (int channel = 0; channel < totalNumInputChannels; ++channel)
{ 
    
    buffer.copyFromWithRamp(channel, 0, buffer1.getReadPointer(channel), buffer1.getNumSamples(), 1.f, 1.f);
    buffer.addFromWithRamp(channel, 0, buffer2.getReadPointer(channel), buffer2.getNumSamples(), 1.f, 1.f);

    auto band1Filter = Coef::makeLowShelf(getSampleRate(), eq1Freq, 1, juce::Decibels::decibelsToGain(band1Lvl));
    auto band2Filter = Coef::makePeakFilter(getSampleRate(), eq2Freq, 1, juce::Decibels::decibelsToGain(band2Lvl));
    auto band3Filter = Coef::makePeakFilter(getSampleRate(), eq3Freq, 1, juce::Decibels::decibelsToGain(band3Lvl));
    auto band4Filter = Coef::makePeakFilter(getSampleRate(), eq4Freq, 1, juce::Decibels::decibelsToGain(band4Lvl));
    auto band5Filter = Coef::makeHighShelf(getSampleRate(), eq5Freq, 1, juce::Decibels::decibelsToGain(band5Lvl));

    limiter.setRelease(limitRelease);
    limiter.setThreshold(limitThresh);

    *eqChain[channel].get<posInChain::eqBand1>().coefficients = *band1Filter;
    *eqChain[channel].get<posInChain::eqBand2>().coefficients = *band2Filter;
    *eqChain[channel].get<posInChain::eqBand3>().coefficients = *band3Filter;
    *eqChain[channel].get<posInChain::eqBand4>().coefficients = *band4Filter;
    *eqChain[channel].get<posInChain::eqBand5>().coefficients = *band5Filter;
    
    juce::dsp::AudioBlock<float> eqBlock(buffer);

    auto newEqBlock = eqBlock.getSingleChannelBlock(channel);

    juce::dsp::ProcessContextReplacing<float> eqContext(newEqBlock);
    eqChain[channel].process(eqContext);

}
juce::dsp::AudioBlock<float> limitBlock(buffer);
juce::dsp::ProcessContextReplacing<float> limitContext(limitBlock);
limiter.process(limitContext);

outputGain.setGainDecibels(outputLvl + limitThresh);
outputGain.setRampDurationSeconds(0.02f);
juce::dsp::AudioBlock<float> outputGainBlock(buffer);
juce::dsp::ProcessContextReplacing<float> outputGainContext(outputGainBlock);
outputGain.process(outputGainContext);
}

There’s quite a lot to unpack there, but I’ll touch on the block part. An audio block is a lightweight wrapper around a buffer; processing a block really just manipulates the underlying buffer’s data. The two options you have here come from the process context configuration you use (ProcessContextReplacing vs ProcessContextNonReplacing): the replacing one writes the processed data back into the source buffer, while the non-replacing one reads from the source but writes the processed data into a separate output buffer.
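For instance, a minimal sketch (assuming a prepared juce::dsp::Gain<float> called gain, purely for illustration):

juce::AudioBuffer<float> buffer (2, 512);    // owns the sample data
juce::dsp::AudioBlock<float> block (buffer); // non-owning view of the same data

juce::dsp::ProcessContextReplacing<float> context (block);
gain.process (context); // writes the gained samples straight back into buffer

// buffer now holds the processed data; no copy was ever made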

The oversampling class contains its own internal buffer which holds the upsampled data. That’s why processSamplesUp() returns a block: the returned block is a wrapper over the Oversampling object’s internal buffer.
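The usual lifecycle looks something like this (just a sketch; the constructor arguments are an example: stereo, one stage of 2x oversampling, polyphase IIR filters):

// in the header:
juce::dsp::Oversampling<float> oversampler { 2, 1, juce::dsp::Oversampling<float>::filterHalfBandPolyphaseIIR };

// in prepareToPlay():
oversampler.initProcessing ((size_t) samplesPerBlock);

// in processBlock():
juce::dsp::AudioBlock<float> block (buffer);
auto upsampledBlock = oversampler.processSamplesUp (block); // a view of the internal, higher-rate buffer
// ... process upsampledBlock at the oversampled rate ...
oversampler.processSamplesDown (block); // filters and decimates back into buffer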

As to your initial question on process flow, either approach can be viable. Be aware, though, that oversampling introduces phase shift when the resampling filters are minimum phase (i.e. not linear phase), and if you use linear-phase filters instead, you will need to manage the delay compensation on the other bands. It’s likely easier to just do the resampling once at the start and end of the entire chain, unless you have to narrow it down to one section for CPU reasons.
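In sketch form, with processChain() as a hypothetical stand-in for your whole split/distort/compress/merge/EQ/limiter section (note that any filters inside it need their coefficients computed for the oversampled rate):

void AudioProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
    juce::dsp::AudioBlock<float> block (buffer);

    auto upsampledBlock = oversampler.processSamplesUp (block);
    processChain (upsampledBlock, getSampleRate() * oversampler.getOversamplingFactor());
    oversampler.processSamplesDown (block);
}

// and in prepareToPlay(), report the resampler's latency to the host:
setLatencySamples ((int) std::ceil (oversampler.getLatencyInSamples()));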

That’s all very helpful information, thank you very much!
I was having trouble earlier where, when I turned on the oversampling function, it just pushed the clean, unprocessed signal through, but now I suspect I just wasn’t processing the correct block or something. I may be able to troubleshoot from there, thank you again.

Okay, I lied. I thought I had some ideas, but I’m lost. How would I go about duplicating an audio block so I can process the copies independently of each other and then merge them later? I assume I would have to merge them again before downsampling, but I’ve tried it a few different ways with no luck. The first thing I tried was:

juce::dsp::Oversampling<float> oversampler; //in my header

juce::dsp::AudioBlock<float> oversampleBlock(buffer);
juce::dsp::AudioBlock<float> upsampledBlock(buffer);

upsampledBlock = oversampler.processSamplesUp(oversampleBlock);
juce::dsp::AudioBlock<float> upsampledCh1(upsampledBlock);
juce::dsp::AudioBlock<float> upsampledCh2(upsampledBlock);

//some gain controls for each channel

for (int channel = 0; channel < upsampledBlock.getNumChannels(); ++channel)
{
    auto* channel1data = upsampledCh1.getChannelPointer(channel);
    auto* channel2data = upsampledCh2.getChannelPointer(channel);

    //some filter stuff
    for (int sample = 0; sample < upsampledBlock.getNumSamples(); sample++)
    {
        //some distortion stuff using channel1data and channel2data;
    }
}
oversampler.processSamplesDown(oversampleBlock);

I get LOADS of artifacts if I leave the per-sample processing loop in. It’s hard to tell without the distortion present, but I suspect that instead of splitting the signal and processing each copy individually, it’s processing the same signal multiple times.

Is it just writing everything to the original oversampleBlock in this case? If so, is there a better way to duplicate an audio block to avoid this?

I’ve also tried duplicating the buffer itself, the way I had my code working before trying to implement oversampling, but I seem to have even more trouble with that:

//header;
juce::dsp::Oversampling<float> oversampler;
juce::AudioBuffer<float> buffer1;
juce::AudioBuffer<float> buffer2;

juce::dsp::AudioBlock<float> oversampleBlock(buffer);
juce::dsp::AudioBlock<float> upsampledBlock(buffer);

buffer1.makeCopyOf(buffer);
buffer2.makeCopyOf(buffer);

upsampledBlock = oversampler.processSamplesUp(oversampleBlock);
juce::dsp::AudioBlock<float> upsampledCh1(buffer1);
upsampledCh1.copyFrom(upsampledBlock);
juce::dsp::AudioBlock<float> upsampledCh2(buffer2);
upsampledCh2.copyFrom(upsampledBlock);

//some gain controls for each channel

for (int channel = 0; channel < upsampledBlock.getNumChannels(); ++channel)
{
    auto* channel1data = upsampledCh1.getChannelPointer(channel);
    auto* channel2data = upsampledCh2.getChannelPointer(channel);

    //some filter stuff
    for (int sample = 0; sample < upsampledBlock.getNumSamples(); sample++)
    {
        //some distortion stuff using channel1data and channel2data;
    }
    buffer.copyFromWithRamp(channel, 0, upsampledCh1.getChannelPointer(channel), (int) upsampledCh1.getNumSamples(), 1.f, 1.f);
    buffer.addFromWithRamp(channel, 0, upsampledCh2.getChannelPointer(channel), (int) upsampledCh2.getNumSamples(), 1.f, 1.f);
}

oversampler.processSamplesDown(oversampleBlock);

Using the buffers this way seems to result in no processed signal making it to the output at all, no matter how I rearrange or modify it.
I’m at a loss; clearly there’s a big chunk of the puzzle I’m missing here.

Sorry for the lengthy response; I try to be as descriptive as possible, to give as full a picture as I can of the issue I’m running into and to eliminate some of the back-and-forth.