Is there a buffer feature above the processBlock function?

Hello,

We run our computation on the whole buffer rather than sample by sample. I have a 48 kHz sample rate and a buffer size of 480 samples, so my processBlock function is called every 10 ms with 480 samples.

What happens if my code sometimes takes more than 10 milliseconds to do the computation? Is there a buffer system above processBlock that would allow me not to lose any samples? Do I have to do the computation in a separate thread with a queue in that case, so that no samples are lost? I suppose I will have some latency issues with asynchronous computation in a dedicated thread?

Is there an easy way to detect when I am losing samples?

I tried to run a test and it looks like there is no buffering system. Is that correct?

Dummy way to test it:

void test_guiAudioProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
    auto totalNumInputChannels  = getTotalNumInputChannels();
    auto totalNumOutputChannels = getTotalNumOutputChannels();
    uint32_t host_sample_count = buffer.getNumSamples();
    juce::ignoreUnused (host_sample_count);
    static int count = 0;

    if (totalNumInputChannels < totalNumOutputChannels)
    {
        // Clear outputs
        for (auto i = 0; i < totalNumOutputChannels; ++i)
        {
            buffer.clear (i, 0, buffer.getNumSamples());
        }
        return;
    }

    // Every 10th block, stall the audio thread for 70 ms (far more than the 10 ms budget)
    if (count++ == 10)
    {
        Sleep (70); // Windows Sleep(); sleeps 70 ms
        count = 0;
    }
}

Then I tried to record in Audacity, and I can hear that the sound jumps regularly.

Thanks,
Arthur.

JUCE itself doesn’t do any additional safety buffering for you, but some DAW software might, under some circumstances. For example, Reaper does 200 ms of prebuffering by default for already-recorded tracks, which can help with (or hide) problems with plugins that have occasional CPU usage peaks. Why does your plugin have such peaks? Is it doing memory allocations or disk accesses or something like that?
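
If you just want to see when you are missing the deadline, there is nothing automatic in JUCE for that, but you can time the callback yourself. Rough sketch only (overrunCount is a member I made up for illustration; wrap it around your real processing):

// Members of your AudioProcessor subclass.
std::atomic<int> overrunCount { 0 };

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    const double startMs = juce::Time::getMillisecondCounterHiRes();

    // ... your normal processing ...

    const double elapsedMs = juce::Time::getMillisecondCounterHiRes() - startMs;
    const double budgetMs  = 1000.0 * buffer.getNumSamples() / getSampleRate();

    if (elapsedMs > budgetMs)
        ++overrunCount; // poll this from the editor / a Timer to see how often the deadline is missed
}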

Ouch. OK, thank you for the confirmation.

I need to do an FFT with a very big window size and also try some machine learning computation.
The problem is that my computation is really not spread over time: I am basically doing nothing most of the time, and when I have enough data I do a very heavy computation.

No disk access or dynamic memory allocation in my process loop. I try to keep it clean so far.

No automatic way to easily detect lost samples?

If you have a really large FFT size, you probably won’t be able to meet the processing deadline in the blocks where the FFT is actually performed. (I’ve myself experimented with FFT sizes up to one minute or so; there is no chance those could be calculated within the normal audio callback deadlines.)

So you will probably need to do what you were already thinking about: a background thread that does the calculations, and you will need to add some additional latency to your plugin.
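
Very roughly, the structure could look like the sketch below. This is only one way to do it, with made-up names (HeavyWorker, chunkSize) and big simplifications: it is mono, it just drops input if the worker is still busy, and the output copy is left as a comment.

// Sketch only: members of your AudioProcessor subclass. HeavyWorker, chunkSize
// and the single-channel handling are assumptions for illustration.
struct HeavyWorker : juce::Thread
{
    HeavyWorker() : juce::Thread ("heavy-dsp") {}

    void run() override
    {
        while (! threadShouldExit())
        {
            wait (-1);                              // woken by notify() from the audio thread
            if (threadShouldExit())
                return;
            if (done.load())
                continue;                           // spurious wake-up, nothing to process

            // ... big FFT / ML inference reading 'input', writing 'output' ...
            output = input;                         // placeholder for the real work
            done.store (true);                      // audio thread may now read 'output' and refill 'input'
        }
    }

    static constexpr int chunkSize = 16384;         // made-up analysis size
    std::vector<float> input  = std::vector<float> (chunkSize, 0.0f);
    std::vector<float> output = std::vector<float> (chunkSize, 0.0f);
    std::atomic<bool> done { true };
};

HeavyWorker worker;
int writePos = 0;

void prepareToPlay (double, int) override
{
    setLatencySamples (HeavyWorker::chunkSize);     // report the extra delay to the host
    worker.startThread();
}

void releaseResources() override
{
    worker.stopThread (2000);
}

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    auto* in = buffer.getReadPointer (0);

    for (int i = 0; i < buffer.getNumSamples(); ++i)
    {
        if (worker.done.load())                     // only touch 'input' while the worker is idle
        {
            worker.input[(size_t) writePos] = in[i];

            if (++writePos == HeavyWorker::chunkSize)
            {
                writePos = 0;
                worker.done.store (false);
                worker.notify();                    // hand the full chunk to the background thread
            }
        }
        // else: the worker is still busy, so this sketch simply drops the sample;
        // a real implementation would keep queueing input (adding latency) instead.
    }

    // ... copy the previously finished worker.output into 'buffer', delayed by chunkSize samples ...
}

The key points are that processBlock never blocks waiting for the worker, and that the extra delay is reported to the host with setLatencySamples().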

At the cost of increased latency, for the machine learning part you could also split the calculation across multiple blocks, for example performing one layer every 32 samples, so with 10 layers the full inference pass is spread over 320 samples. This doesn’t gain you anything if all 320 samples are part of the same block, but it prevents you from doing one massive calculation that might miss the deadline for the current block.
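
Something along these lines, with runLayer() standing in for whatever actually executes one layer of your model (it is not an existing function):

// Hypothetical sketch: one layer of the network every 32 samples.
// Members of your AudioProcessor subclass.
int samplesUntilNextStep = 0;
int nextLayer = 0;
static constexpr int numLayers = 10;
static constexpr int samplesPerStep = 32;

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    for (int i = 0; i < buffer.getNumSamples(); ++i)
    {
        if (--samplesUntilNextStep <= 0)
        {
            samplesUntilNextStep = samplesPerStep;
            runLayer (nextLayer);                       // hypothetical: run one layer of the model
            nextLayer = (nextLayer + 1) % numLayers;    // after 10 steps (320 samples) a full pass is done
        }

        // ... normal per-sample audio work ...
    }
}

As said above, this only helps when the steps end up in different blocks; within a single 480-sample block you would still run all 10 layers.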