Downmixer plugin processBlock

I finally got round to reading the docs for processBlock()… and I have questions.

"if your processor has 5 inputs and 2 outputs, the buffer will have 5 channels, all filled with data, and your processor should overwrite the first 2 of these with its output. But be VERY careful not to write anything to the last 3 channels."

Say I am working on a downmixer plugin doing LCR → Stereo. It does a trim per channel using a dsp::Gain processor and a waveshaper per channel with a dsp::WaveShaper, then ‘routes’ left-in to left-out, right-in to right-out, and C-in to both L-out and R-out:

Ch1 -> L Gain -> L Shaper  -> Ch1
Ch2 -> R Gain -> R Shaper  -> Ch2
Ch3 -> C Gain -> C Shaper --> Ch1
                          \-> Ch2

(I understand there is no actual signal flow here, and that there are 3 buffers being read from and written to in place.)

So, I break my buffer up into 3 AudioBlocks - 1 per input channel. Let’s call the channel buffer state at the beginning of the block ‘Ch# In’ and at the end of the block ‘Ch# Out’.

  • For Ch1, I can use a ProcessContextReplacing - Ch1 In replaces and becomes Ch1 Out.
  • For Ch2, I can use a ProcessContextReplacing - Ch2 In replaces and becomes Ch2 Out.
  • For Ch3, I must treat Ch3 as ‘read-only’, so I must use a ProcessContextNonReplacing and pass it a separate AudioBlock to put the processed samples in. I obviously cannot dynamically allocate this buffer in processBlock, and I cannot be sure in advance how big it needs to be (the block size can vary).

How should I create/allocate this AudioBlock? HeapBlock member variable? C-style array on the stack, created in processBlock?
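
To make that concrete, the shape I have in mind is roughly this (inside processBlock, assuming the Gain/WaveShaper members are already prepared; ‘scratch’ is exactly the block I don’t know how to allocate, and the names are made up):

juce::dsp::AudioBlock<float> block (buffer);

auto ch1 = block.getSingleChannelBlock (0); // L in / L out
auto ch2 = block.getSingleChannelBlock (1); // R in / R out
auto ch3 = block.getSingleChannelBlock (2); // C in - input-only, must not be written

juce::dsp::ProcessContextReplacing<float> leftContext (ch1);
leftGain.process (leftContext);
leftShaper.process (leftContext);

juce::dsp::ProcessContextReplacing<float> rightContext (ch2);
rightGain.process (rightContext);
rightShaper.process (rightContext);

// Ch3 needs somewhere else to write to, i.e. something like
//     juce::dsp::ProcessContextNonReplacing<float> centreContext (ch3, scratch);
//     centreGain.process (centreContext);
// ...where 'scratch' is the AudioBlock I don't know how to allocate,
// and whose contents would then be shaped and added into ch1 and ch2.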

Does it have to be an AudioBlock? If you just work on the AudioBuffer in processBlock, it is quite trivial.
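
For example, the plain downmix part (leaving the waveshaping aside for a moment, gain values made up) could be as short as this, working straight on the buffer you are given:

buffer.applyGain (0, 0, buffer.getNumSamples(), 0.707f);             // trim L
buffer.applyGain (1, 0, buffer.getNumSamples(), 0.707f);             // trim R
buffer.addFrom   (0, 0, buffer, 2, 0, buffer.getNumSamples(), 0.5f); // C -> L
buffer.addFrom   (1, 0, buffer, 2, 0, buffer.getNumSamples(), 0.5f); // C -> R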

But you can use a ProcessContextReplacing; it will give you access to all channels. There is nothing technically stopping you from altering the data in the input-only channels, hence the warning in the docs.

The original AudioBuffer you get from the host is always replacing. The only difference between replacing and non-replacing is that the input pointer is different from the output pointer. That is the caller's responsibility though, i.e. it is decided when you call the dsp processor's process() function.
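
In code the difference looks like this (names are placeholders, scratchBuffer being some pre-allocated memory):

juce::dsp::AudioBlock<float> io (buffer);                             // the host's buffer
juce::dsp::ProcessContextReplacing<float> replacing (io);             // input and output are the same memory
someProcessor.process (replacing);

// or, alternatively:
auto out = juce::dsp::AudioBlock<float> (scratchBuffer).getSubBlock (0, io.getNumSamples());
juce::dsp::ProcessContextNonReplacing<float> nonReplacing (io, out);  // input and output are different memory
someProcessor.process (nonReplacing);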

Please steer clear of that AudioBlock/HeapBlock constructor. It is the greatest mistake in the code base, IMHO. An AudioBlock should be seen as an easily copyable view onto audio data.
You've got five channels; that's all you need.

I think it can be written neatly:

auto left = block.getSingleChannelBlock(0); // FIXME: use better approach for channel indices
auto right = block.getSingleChannelBlock(2);

// sadly there are no const overloads for addWithMultiply afaics
auto centre = block.getSingleChannelBlock(1);
auto leftRear = block.getSingleChannelBlock(3);
auto rightRear = block.getSingleChannelBlock(4);

// completely made up numbers
left *= 0.707f;
right *= 0.707f;
left.addWithMultiply (centre, 0.5f);
right.addWithMultiply (centre, 0.5f);
left.addWithMultiply (rightRear, 0.2f);
right.addWithMultiply (leftRear, 0.2f);
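
(For completeness: block above is just a view over the buffer you get in processBlock, created with the benign constructor:

juce::dsp::AudioBlock<float> block (buffer);

...which allocates nothing and simply wraps the buffer's existing channel pointers.)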

Thank you for your advice, Daniel. Sorry if I’ve missed something, but I don’t see the waveshaping step in your code. I see how I can perform the downmix in place, by continuously adding into the output channels, but what do I do with the result of an intermediate step (like a waveshaper), when I may not overwrite that channel or any other channel’s data?

I think that would go in a dsp::ProcessorChain?
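
Something like this is what I mean (just a sketch from reading the ProcessorChain docs; the names and values are mine, and it needs the juce_dsp module plus <cmath> for std::tanh):

juce::dsp::ProcessorChain<juce::dsp::Gain<float>, juce::dsp::WaveShaper<float>> centreChain;

// in prepareToPlay:
centreChain.get<0>().setGainLinear (0.5f);
centreChain.get<1>().functionToUse = [] (float x) { return std::tanh (x); };
centreChain.prepare ({ sampleRate, (juce::uint32) samplesPerBlock, 1 });

// in processBlock, one process() call runs the whole gain -> shaper chain:
centreChain.process (someContext);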

I’ll try some stuff and report back.

On a slight tangent… why should AudioBlock(HeapBlock&, …) be avoided?

Oh. I see.

An AudioBuffer as a member variable seems to work quite nicely as a place to put intermediate data. The buffer can be allocated up front, setSize() has options (such as the avoidReallocating flag) that help keep allocation off the real-time thread, it copes with varying channel counts, and it is obviously compatible with the good AudioBlock constructor as well as the plain AudioBuffer functions.
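
For anyone finding this later, the shape I ended up with is roughly this (member and function names are mine, coefficients made up, and the left/right trim/shaping is omitted for brevity):

// members of the processor:
//   juce::AudioBuffer<float>     scratchBuffer;
//   juce::dsp::Gain<float>       centreGain;
//   juce::dsp::WaveShaper<float> centreShaper;

void prepareToPlay (double sampleRate, int samplesPerBlock) // override
{
    scratchBuffer.setSize (1, samplesPerBlock);   // allocate the intermediate buffer up front

    juce::dsp::ProcessSpec spec { sampleRate, (juce::uint32) samplesPerBlock, 1 };
    centreGain.prepare (spec);
    centreShaper.prepare (spec);
    centreShaper.functionToUse = [] (float x) { return std::tanh (x); };   // needs <cmath>
}

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) // override
{
    const auto numSamples = (size_t) buffer.getNumSamples();

    juce::dsp::AudioBlock<float> block (buffer);
    auto left   = block.getSingleChannelBlock (0);
    auto right  = block.getSingleChannelBlock (1);
    auto centre = block.getSingleChannelBlock (2);   // input-only, never written

    // run the centre trim into the scratch buffer instead of in place...
    auto scratch = juce::dsp::AudioBlock<float> (scratchBuffer).getSubBlock (0, numSamples);
    juce::dsp::ProcessContextNonReplacing<float> centreTrim (centre, scratch);
    centreGain.process (centreTrim);

    // ...then shape the scratch copy in place...
    juce::dsp::ProcessContextReplacing<float> centreShape (scratch);
    centreShaper.process (centreShape);

    // ...and finally sum the shaped centre into both outputs
    left.addWithMultiply  (scratch, 0.707f);
    right.addWithMultiply (scratch, 0.707f);
}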

Thank you for the help, Daniel.