processBlock vs getNextAudioBlock

I am an audio newb and a professional C++ developer. Please pardon my question if it misses the obvious.

The AudioSourceChannelInfo parameter of the getNextAudioBlock() override allows me to determine which channels are active using getActiveInputChannels() and getActiveOutputChannels().

The AudioBuffer parameter of the processBlock() override allows me to get the number of channels with getNumChannels(). Should I assume that the channels returned by getNumChannels() are all active? How would I know which is an input channel and which is an output channel?

WHAT I AM TRYING TO ACCOMPLISH

I would like to adjust the Input and Output levels individually.

I’m pretty sure the channels in that AudioBuffer instance are both the input and output channels. That’s why it’s a reference, and not a const reference.

If you wanted to separate the two, you could have some other AudioBuffer member variable that you use as the output and for processing, something like this:

void processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiBuffer) {
    // Obviously make this a pre-allocated member variable, not a local,
    // which allocates when it's copied.
    auto tempBuffer = buffer;
    tempBuffer.applyGain(...); // adjust your input gain

    processDSP(tempBuffer);    // whatever your actual processing is

    tempBuffer.applyGain(...); // adjust your output gain

    buffer = tempBuffer;       // now copy it back
}

It’s just an idea.

Where are you using/seeing getNextAudioBlock()?

processBlock() is specific to AudioProcessor; it’s what the DAW calls automatically for you once your plugin is instantiated on a track.

I believe getNextAudioBlock() is usually for audio application-related stuff you’re writing yourself.
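
Roughly, the two callbacks live on different base classes. A bare-bones sketch (the class names here are made up, and a real AudioProcessor has more pure virtuals to implement than shown):

// Plugin: the DAW drives the audio callback and calls processBlock() for you.
class MyPluginProcessor : public juce::AudioProcessor
{
    void processBlock (juce::AudioBuffer<float>& buffer,
                       juce::MidiBuffer& midiMessages) override
    {
        // process `buffer` in place
    }
    // ... plus the rest of the AudioProcessor boilerplate
};

// Standalone/GUI app: AudioAppComponent sets up the audio device and
// calls getNextAudioBlock() on the audio thread.
class MainComponent : public juce::AudioAppComponent
{
    void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override {}
    void releaseResources() override {}

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
    {
        // fill or process bufferToFill.buffer here
    }
};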

processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiMessages)
takes a reference to a JUCE-specific AudioBuffer, filled with floats.

Of course you can get some information out of the methods belonging to the AudioBuffer class.

But to do some processing on the block, you get a write pointer into the AudioBuffer (getWritePointer()) and modify its contents. If you don’t need to write, get a read pointer (getReadPointer()) instead.

You could perform some gain processing (input) on the whole block, do your processing, and do some final gain processing on the result (output). You could also do the processing sample by sample.
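
As a rough sketch of the whole-block version inside processBlock (inputGain, outputGain and doSomething() are placeholders here, not real JUCE names):

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    buffer.applyGain (inputGain);    // input gain on the whole block

    // Your actual processing: grab a write pointer per channel and modify in place.
    for (int channel = 0; channel < buffer.getNumChannels(); ++channel)
    {
        auto* samples = buffer.getWritePointer (channel);

        for (int i = 0; i < buffer.getNumSamples(); ++i)
            samples[i] = doSomething (samples[i]);   // stand-in for your DSP
    }

    buffer.applyGain (outputGain);   // output gain on the result
}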

Also, you might reference the API docs for more details.

Thanks!

I was experimenting with Tutorial: Processing audio input

It seems like getNextAudioBlock() is used for GUI projects.

In the AudioBuffer::applyGain() call there’s a “channel” parameter.

How would I know which channel is my output channel and which is my input channel?

Usually channel 0 is the left channel in a stereo buffer, and channel 1 is the right channel.
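
So if you just want a different gain per channel of a stereo block inside processBlock, something like this (leftGain / rightGain are placeholders for whatever values you want):

buffer.applyGain (0, 0, buffer.getNumSamples(), leftGain);   // channel 0 = left
buffer.applyGain (1, 0, buffer.getNumSamples(), rightGain);  // channel 1 = right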

You can experiment to find out what the DAW is sending for non-stereo buffers like this:

  • clear() the entire buffer,
  • use a dsp::Oscillator to fill a single channel in the buffer
  • look at the DAW’s built-in metering to see which channel number in processBlock corresponds to the channel you see lit up in the DAW, something like the sketch below.
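
A rough sketch of that test, assuming the juce_dsp module and a couple of member variables (channelToTest and testOsc are just names I’m using here):

// Members:
juce::dsp::Oscillator<float> testOsc { [] (float x) { return std::sin (x); } };
int channelToTest = 0;   // change this and watch the DAW's meters

void prepareToPlay (double sampleRate, int samplesPerBlock) override
{
    testOsc.prepare ({ sampleRate, (juce::uint32) samplesPerBlock, 1 });
    testOsc.setFrequency (440.0f);
}

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    buffer.clear();   // silence everything first

    // Write the oscillator into just one channel.
    juce::dsp::AudioBlock<float> block (buffer);
    auto singleChannel = block.getSingleChannelBlock ((size_t) channelToTest);
    juce::dsp::ProcessContextReplacing<float> context (singleChannel);
    testOsc.process (context);
}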

Thanks. I’ll go through the docs again; I could have missed something.

For now, it seems like AudioSourceChannelInfo lets me know specifically whether a channel is for input or output, while AudioBuffer makes no such distinction between the two.

Again, that’s because processBlock is specific to audio plugins; the other function is meant for use when writing audio applications.

Good information.

I’ll fire up a DAW and start checking those meters and channels!

Inputs and outputs aren’t separate channels.

getNextAudioBlock() and processBlock() receive a reference to a block of audio data - that’s the input. You then process the data in that block to produce the output, which you write back to the same buffer.

void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill)
{
    // Use the region we've been asked to fill, not necessarily the whole buffer.
    const auto numSamples = bufferToFill.numSamples;
    const auto numChannels = bufferToFill.buffer->getNumChannels();

    // Iterate channels first.
    for (int channel = 0; channel < numChannels; channel++)
    {
        // Get a pointer to this channel's data, starting at the region we were given.
        auto* data = bufferToFill.buffer->getWritePointer (channel, bufferToFill.startSample);

        // Iterate over the samples in this channel.
        for (int i = 0; i < numSamples; i++)
        {
            auto sample = data[i];  // Read input from buffer.
            sample *= inputGain;    // Apply input gain.

            // ...
            // Do some other processing here...
            // ...

            sample *= outputGain;   // Apply output gain.
            data[i] = sample;       // Write output to buffer.
        }
    }
}

Interesting.

“data[i]” is both the input and output buffer?

Yes, they are the same, so for some types of processing you may, for example, have to make your own copy of the input buffer first.
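
For example, something along these lines in the getNextAudioBlock() case (inputCopy is just a name I’m using; the point is that it’s pre-allocated, so nothing allocates on the audio thread):

// Member, sized once up front:
juce::AudioBuffer<float> inputCopy;

void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override
{
    inputCopy.setSize (2, samplesPerBlockExpected);   // assuming a stereo buffer here
}

void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
{
    auto& buffer = *bufferToFill.buffer;

    // Keep an untouched copy of the incoming block.
    for (int channel = 0; channel < buffer.getNumChannels(); ++channel)
        inputCopy.copyFrom (channel, 0, buffer, channel,
                            bufferToFill.startSample, bufferToFill.numSamples);

    // ... now process `buffer` in place, reading the original samples
    // from `inputCopy` whenever you need them ...
}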