Correct way to use ProcessContextNonReplacing

Hi, I'm trying to use ProcessContextNonReplacing, as I have a number of different channels that each need to process the input data independently; I then mix them together.

I have a buffer for each channel. I construct a ProcessContextNonReplacing using the main buffer as input and one of the channel buffers as output. I then mix them all together, using a ProcessContextNonReplacing with each channel buffer as the input and the main buffer as the output. The final step is another processor that does a little EQ, which uses a ProcessContextReplacing with just the main buffer.
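For clarity, the mix-down step described here can be sketched JUCE-free as summing each independently processed channel buffer into one destination (a simplified model for illustration, not the plugin's actual code):

```cpp
#include <cstddef>
#include <vector>

// Sum several independently processed channel buffers into one mix buffer.
std::vector<float> mixDown (const std::vector<std::vector<float>>& channelBuffers,
                            std::size_t numSamples)
{
    std::vector<float> mix (numSamples, 0.0f);

    for (const auto& channel : channelBuffers)
        for (std::size_t i = 0; i < numSamples && i < channel.size(); ++i)
            mix[i] += channel[i];   // accumulate each channel's contribution

    return mix;
}
```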

During testing, only the final EQ part seems to be processed. All the channel processing using ProcessContextNonReplacing seems not to be working; the channels are just acting like a pass-through.

Here is a snippet of how I'm constructing the ProcessContextNonReplacing parts. If I swap this to a ProcessContextReplacing and copy to and from the buffers, things seem to work OK, but obviously that's a lot of copying that I shouldn't need to do if I can get ProcessContextNonReplacing working:

    void processBlock(AudioBuffer<float>& buffer, MidiBuffer& midiMessages) override {
        using namespace dsp;
        ScopedNoDenormals noDenormals;
        juce::ignoreUnused(midiMessages);
        ...
        AudioBlock<float> inputAudioBlock(buffer);
        AudioBlock<float> outputBlock(buffer1);  // buffer1 is a field in this class
        ProcessContextNonReplacing<float> nonReplacingContext(inputAudioBlock, outputBlock);
        channel1processor.process(nonReplacingContext);

I have also tried just using the read/write pointers:

        const auto numberOfChannels = (size_t) jmin(buffer1.getNumChannels(), buffer.getNumChannels());
        const auto L = (size_t) buffer.getNumSamples();
        AudioBlock<const float> inputAudioBlock(buffer.getArrayOfReadPointers(), numberOfChannels, L);
        AudioBlock<float> outputAudioBlock(buffer1.getArrayOfWritePointers(), numberOfChannels, L);
        ProcessContextNonReplacing<float> channelContext(inputAudioBlock, outputAudioBlock);
        channel1Processor.process(channelContext);

Am I doing something stupid that I just can't see?

If I do a manual copy and use a ProcessContextReplacing like this, then the processors in the channel processor work:

       buffer1.copyFrom(0,0, buffer, 0, 0, buffer.getNumSamples());
       buffer1.copyFrom(1,0, buffer, 1, 0, buffer.getNumSamples());
       dsp::AudioBlock<float> outputBlock(buffer1);
       dsp::ProcessContextReplacing<float> channelContext(outputBlock);

Help!

Did you set your buffer size correctly? And why not use ProcessContextReplacing if it works for you?
Also, use AudioBlock::copyTo / AudioBlock::copyFrom / AudioBuffer::makeCopyOf to reduce lines of code instead of copying each channel.


Hi, thanks for the reply, I do size the buffers in prepareToPlay:

void prepareToPlay(double newSampleRate, int newSamplesPerBlock) override {
       ...
       sampleRate = newSampleRate;
       samplesPerBlock = newSamplesPerBlock;
       dsp::ProcessSpec spec{sampleRate, juce::uint32(samplesPerBlock), juce::uint32(channels)};
       buffer1.setSize(static_cast<int>(spec.numChannels), static_cast<int>(spec.maximumBlockSize));

I wanted to avoid having to copy the input, as I have six channels, so I thought using ProcessContextNonReplacing would help me optimise rather than having to do six copies.

Incidentally, the channelProcessor has a few processors that it itself calls: Convolution and Gain. The Gain seems to work but the Convolution doesn't. Weird that it works with ContextReplacing, though?

I’m pretty sure the NonReplacing context is simply doing, under the hood, the same copy you want to avoid, so there is probably no optimisation gain there.

You need to set your buffer size in processBlock (yes, each time), since the buffer size may vary on each call. In prepareToPlay you get the maximum possible buffer size and pass it to dsp processors like Convolution, but plain AudioBuffers usually have nothing to do there.

So better to stick with ContextReplacing and use buffer1.makeCopyOf(buffer) inside processBlock for your needs (for as many copies as you want to process).


Set the size of your work buffers to the maxBlockSize parameter value in the prepareToPlay() callback. The official name of that parameter is “maximumExpectedSamplesPerBlock”.

Do not resize your buffers in processBlock (avoid allocations in that scope).

In processBlock, iterate over the number of samples from the host buffer, which can be different for each call, even zero, so make sure you support 0 to maxBlockSize.
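The sizing advice above can be sketched without the dsp module: allocate the work buffer once to the maximum size in the prepare callback, then let each process call handle whatever sample count the host passes, from zero up to that maximum (a hypothetical minimal shape, not the poster's actual class):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct WorkBufferExample
{
    std::vector<float> work;        // sized once, never resized on the audio thread
    std::size_t maxBlockSize = 0;

    // Mirrors prepareToPlay(): "maximumExpectedSamplesPerBlock" is an upper bound.
    void prepare (std::size_t maximumExpectedSamplesPerBlock)
    {
        maxBlockSize = maximumExpectedSamplesPerBlock;
        work.assign (maxBlockSize, 0.0f);   // allocate up front
    }

    // numSamples may differ on every call, anywhere in [0, maxBlockSize].
    void process (float* hostBuffer, std::size_t numSamples)
    {
        assert (numSamples <= maxBlockSize);

        for (std::size_t i = 0; i < numSamples; ++i)
            work[i] = hostBuffer[i] * 0.5f;   // placeholder processing

        for (std::size_t i = 0; i < numSamples; ++i)
            hostBuffer[i] = work[i];
    }
};
```

A zero-sample call simply does nothing, which is the behaviour the advice above asks you to support.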

If you do convolution, you really do not need to worry about the time it takes to do a few buffer copies; that is so fast compared with the FFTs and complex multiplications. Make a reminder note for yourself to do timing measurements when everything works, and then decide if you want to save a few (fractions of) microseconds.

If you want to use the dsp context approach, make sure that it is based on this more general JUCE approach to sizing buffers, in case you ever want to do something without the dsp module.


It's kind of one of those things where I need to know why ContextNonReplacing just doesn't work, though. I do check that the buffer sizes are correct during the process call. In Logic it's the same buffer size, but with PluginDoctor the buffer that's passed in is always a lot bigger; I see an assert in one of the processors if I don't set it.

It's a bit of a mystery why Convolution doesn't like the non-replacing contexts.

Have you tried your setup with the convolution replaced by any other “processor” from the dsp module, like chorus?

Are you sure this is related to convolution? Or is there a chance that the issue is in your code? Can you answer either question with 100% yes or no? :wink:

No idea what PluginDoctor does, but for me hostBuffer.getNumSamples() is the “truth”. Is that tool not reporting something else, like a DAW or audio-interface block size?

I have an irLevelBalance Gain processor in the same class, and that's applying a -5.2 dB gain reduction and is being processed. If I just switch the context to replacing, then the convolution works; that's why I was asking if I had constructed the context properly.

    template <typename ProcessContext>
    void process (const ProcessContext& context) noexcept {
        if (context.isBypassed) {
            return;
        }

        convolution.process(context);
        //TODO set this or retrieve IR set by the cab/speaker/mic change matrix
        irLevelBalance.setGainDecibels(-5.20f);
        irLevelBalance.process(context);

        // Invert phase, if needed
        if (invertPhase) {
            phaseInverter.setGainLinear(-1.0f);
            phaseInverter.process(context);
        }
    }
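One possible explanation, inferred from the snippet above rather than confirmed in this thread: each dsp processor reads from the context's input block and writes to its output block, so passing the same non-replacing context down a chain means every later stage re-reads the untouched input and overwrites what the previous stage produced. Only the last processor's effect would survive, which would look exactly like “Gain works but Convolution doesn't”. A JUCE-free sketch of that pitfall, using toy gain stages in place of real processors:

```cpp
#include <cstddef>
#include <vector>

// Toy stand-in for a dsp processor: reads 'in', writes 'out'.
struct GainStage
{
    float gain;

    void process (const std::vector<float>& in, std::vector<float>& out) const
    {
        for (std::size_t i = 0; i < in.size(); ++i)
            out[i] = in[i] * gain;
    }
};

// Chaining two stages through the SAME (in, out) pair: stage2 re-reads the
// dry input, so stage1's result is overwritten -- the non-replacing pitfall.
std::vector<float> chainWrong (const std::vector<float>& input)
{
    std::vector<float> output (input.size());
    GainStage stage1 { 2.0f }, stage2 { 0.5f };
    stage1.process (input, output);   // output = input * 2
    stage2.process (input, output);   // overwrites: output = input * 0.5
    return output;
}

// Correct chaining: later stages read from the previous stage's output.
std::vector<float> chainRight (const std::vector<float>& input)
{
    std::vector<float> output (input.size());
    GainStage stage1 { 2.0f }, stage2 { 0.5f };
    stage1.process (input, output);   // output = input * 2
    stage2.process (output, output);  // in place: output = input * 2 * 0.5
    return output;
}
```

If this model matches what is happening, the fix would be to process only the first stage with the non-replacing context and run the later stages with a replacing context built from the output block.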

I recommend switching to replacing mode; it will make your code more compact and force you to think in this “mode”, which is also how the plugin-processor “mini-framework” works with its processBlock method anyway.

Keep any required work buffer, vector or samples copies local, where they are needed and where they make sense when you read the code (i.e. not on a higher, calling level).

If you would use a delay, what would be easier in your code:

process (srcDest, numSamples)

or

process (src, dest, numSamples)?

The semantic meaning is process, right? Not transfer-and-process…
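The in-place signature argued for above can be sketched with a toy one-sample delay (a hypothetical illustration, not a JUCE class):

```cpp
#include <cstddef>

// Minimal in-place one-sample delay, illustrating the process(srcDest, n) style.
struct OneSampleDelay
{
    float state = 0.0f;   // the previously seen input sample

    void process (float* srcDest, std::size_t numSamples)
    {
        for (std::size_t i = 0; i < numSamples; ++i)
        {
            const float in = srcDest[i];
            srcDest[i] = state;   // output the previous sample
            state = in;           // remember the current input
        }
    }
};
```

The caller hands over one buffer and gets it back processed; no separate source and destination to keep in sync.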

I never use the dsp module; I find it a weird, inconsistent and confusing part of JUCE, with its different integer types, its “block” concept which is not a block but a bunch of refs/pointers, etc.

I hoped that some comments on “pattern level” might have helped.

I am sorry I cannot help with the specs, blocks, contexts and casts soup.

This stuff should in my opinion be refactored to the regular Juce pluginProcessor basics.


I took your advice and stopped wasting time messing about with ContextNonReplacing; for whatever reason it just doesn't work properly. I switched to a replacing context, my job is done, and I can work on the other important things!


Cool man, congrats!