Adding an "accumulating buffer" to output buffer question

Hey.

I’m currently trying to add processing to each of my synth’s voices individually. To achieve this, I have added an AudioBuffer to my SynthesiserVoice class, which I’m using to accumulate samples in the renderNextBlock method before processing them and adding them to the outputBuffer.

This, however, is not working as I intended: it produces a clicking noise whenever a key is released, and I can’t figure out what I need to fix.

Here is the code:

void SynthVoice::renderNextBlock (AudioBuffer<float> &outputBuffer, int startSample, int numSamples)
{
    
    int tmpStartSample = 0;

    voiceBuffer.setSize(outputBuffer.getNumChannels(), outputBuffer.getNumSamples());

    for (int sample = 0; sample < voiceBuffer.getNumSamples(); ++sample)
    {
        for (int channel = 0; channel < voiceBuffer.getNumChannels(); ++channel)
        {
            voiceBuffer.addSample(channel, tmpStartSample, getEnvelope() * gain);
        }

        ++tmpStartSample;
    }

    //Here is where the dsp processing should be happening...


    for (int channel = 0; channel < voiceBuffer.getNumChannels(); ++channel)
    {
        outputBuffer.addFrom(channel, 0, voiceBuffer, channel, 0, voiceBuffer.getNumSamples(), 1.0f);
    }

    voiceBuffer.clear();
}

So my question is: What exactly is it that I’m doing wrong / How would I be able to fix this?

Any help would be greatly appreciated!

Still struggling to get this working. Has anybody got any ideas as to what I’m doing wrong? I just can’t find any examples of how to do this properly.

I can’t tell you why it’s doing what it’s doing, but here are some things that caught my eye:

The setSize() call is potentially allocating, so it’s a no-go on the audio thread (renderNextBlock() is called from the audio thread).

Also, setSize() won’t clear the buffer, so you start adding to a random signal. It’s better to clear during the setSize() call instead of after processing:

voiceBuffer.setSize (outputBuffer.getNumChannels(), outputBuffer.getNumSamples(), false, true, true);

Or, ideally, use a proxy AudioBuffer that references the larger buffer but holds only a subset of it:

jassert (numSamples <= voiceBuffer.getNumSamples());
AudioBuffer<float> proxy (voiceBuffer.getArrayOfWritePointers(), voiceBuffer.getNumChannels(), startSample, numSamples);
proxy.clear();
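For anyone unfamiliar with that AudioBuffer constructor: the proxy is just a non-owning view into the preallocated storage, so creating it allocates nothing and writes through it land in the underlying buffer. A minimal sketch of the same idea in plain C++ (the `SampleView`/`makeView` names are illustrative, not JUCE API):

```cpp
#include <cassert>
#include <vector>

// A non-owning window over part of a larger, preallocated buffer.
struct SampleView
{
    float* data;
    int numSamples;

    void clear()
    {
        for (int i = 0; i < numSamples; ++i)
            data[i] = 0.0f;
    }
};

// Make a view of `numSamples` samples starting at `startSample`.
// No copy, no allocation: just pointer arithmetic into the owner.
inline SampleView makeView (std::vector<float>& buffer, int startSample, int numSamples)
{
    assert (startSample + numSamples <= static_cast<int> (buffer.size()));
    return { buffer.data() + startSample, numSamples };
}
```

Clearing or filling the view modifies only that slice of the owning buffer, which is exactly what the proxy AudioBuffer above does per channel.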

Thank you so much for pointing that out!

It seems to be working as it should now, after a few hours of trying to sort it out.
For some reason, I can’t make it work by using addFrom when trying to merge the proxy or voicebuffer with the outputbuffer, but it works when I use a for loop. Had to do a few minor tweaks to the code too, like checking if the voice is active. Here is the final result:

void SynthVoice::renderNextBlock (AudioBuffer<float> &outputBuffer, int startSample, int numSamples)
{
    adsr.setParameters(adsrParameters);

    if (isVoiceActive())
    {
        jassert (numSamples <= voiceBuffer.getNumSamples());
        AudioBuffer<float> proxy (voiceBuffer.getArrayOfWritePointers(), voiceBuffer.getNumChannels(), startSample, numSamples);
        proxy.clear();


        int tmpStartSample = 0;

        for (int sample = 0; sample < proxy.getNumSamples(); ++sample)
        {
            for (int channel = 0; channel < proxy.getNumChannels(); ++channel)
            {
                proxy.addSample (channel, tmpStartSample, adsr.getNextSample() * getWaveform() * gain * 0.4f);
            }
            ++tmpStartSample;
        }

        //Processing here..
        
        for (int sample = 0; sample < numSamples; ++sample)
        {
            for (int channel = 0; channel < proxy.getNumChannels(); ++channel)
            {
                outputBuffer.addSample (channel, startSample, proxy.getSample(channel, sample));
            }
            ++startSample;
        }
    }
}

Doesn’t look that great, but it will do for now. At least it’s finally working!