processBlock called for each sample?

Hello, I am looking for more info on when processBlock is called. Is it possible for it to be called for each new (presumably 44.1 kHz) sample?

processBlock is called once per audio buffer, according to the selected buffer size, e.g. 512 samples, 2048 samples, etc. Whether the sample rate is 44.1, 48, or 96 kHz doesn't matter for the buffer size itself, although at 96 kHz it would presumably be called twice as often as at 48 kHz for the same buffer size.

In an AudioApp, you can specify that setting in the Audio/MIDI Setup, but in a plugin it is up to the host's settings, and the size is not always guaranteed to stay the same from call to call. But it's not possible, AFAIK, to have a buffer size of 1 sample.

But it’s not possible AFAIK to have a buffer size of 1 sample.

Some hosts may occasionally go even that small, for example because of parameter automation, but it certainly isn't the usual case.

Then is processBlock called every N samples (N being the buffer size)? What I am trying to achieve is a real-time circular buffer where I can process between each sample. Is this something the JUCE framework supports?

FL Studio is good to test with, as sometimes the incoming block is 1 sample long!

I think it’s historically split up to match the block to automation events, for plug-ins that don’t handle the incoming MIDI timing properly, but that’s just a guess.

This is why I always treat the incoming block as a single sample stream, and stuff my own buffers if I need an FFT, for example.
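That "stuff your own buffer" approach can be sketched like this (plain C++, the class and names are my own invention, not a JUCE API): accumulate whatever block sizes the host sends, even 1-sample blocks, and fire your processing callback each time a fixed-size window fills up:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Accumulates arbitrarily sized host blocks (even 1 sample) and invokes
// onWindowFull whenever `windowSize` samples have been collected.
class SampleAccumulator
{
public:
    SampleAccumulator(std::size_t windowSize,
                      std::function<void(const std::vector<float>&)> onWindowFull)
        : window(windowSize), callback(std::move(onWindowFull)) {}

    void push(const float* data, std::size_t numSamples)
    {
        for (std::size_t i = 0; i < numSamples; ++i)
        {
            window[writePos++] = data[i];
            if (writePos == window.size())   // window full: process, then restart
            {
                callback(window);
                writePos = 0;
            }
        }
    }

private:
    std::vector<float> window;
    std::size_t writePos = 0;
    std::function<void(const std::vector<float>&)> callback;
};
```

Feed it from processBlock with whatever getNumSamples() reports; the callback fires at a steady window rate regardless of how the host chops up the blocks.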

OK, but for real-time applications the latency between input and output needs to be no more than about 20 ms, if I remember correctly. The latency between recording the input audio block samples and playing back the output audio block samples would be (1/SampleRate)*BufferSize, correct? (Assuming we aren't stuffing our own larger buffers and are just working with what the host gives us.) Then real-time processing applications would need buffer sizes small enough not to cause audible latency.

You’re at the mercy of the host. It will always be the host that sets the latency. Most have an option to set it as low as your hardware allows (I have a USB 2 audio device and in Live I can set it to about 5 ms). This host option represents your live playing latency.

AudioProcessor::prepareToPlay just gives you the maximum block size you can expect in the next playback session, which is useful for your internal memory management.

It’s easy to deal with. If you need any buffers that relate to the block, you resize them in prepareToPlay. In processBlock you use buffer.getNumSamples(), which is a value between 0 and the block size, inclusive. A ring buffer is also resized in prepareToPlay, but of course its length has nothing to do with the block size, only with whatever your DSP idea is. A special case to consider is numSamples == 0, btw. Hosts sometimes do that to signal that there has been silence on both the input and the output. In simple plugin projects you can immediately return when this is true. I think it should be part of the code in the Projucer’s plugin template, btw, because no one ever thinks of that from the beginning.
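A minimal sketch of that life cycle (plain C++ standing in for the JUCE calls: the method names mirror AudioProcessor's prepareToPlay/processBlock, but the class and its half-second delay line are hypothetical):

```cpp
#include <cstddef>
#include <vector>

// Toy processor mirroring the prepareToPlay / processBlock life cycle.
class TinyDelay
{
public:
    // Called before playback starts: size anything block-related here.
    void prepareToPlay(double sampleRate, int /*maxBlockSize*/)
    {
        // The ring buffer's length depends on the DSP idea (here: a 0.5 s
        // delay line), not on the block size.
        ring.assign(static_cast<std::size_t>(sampleRate * 0.5), 0.0f);
        writePos = 0;
    }

    // numSamples can be anything from 0 up to the maximum announced in
    // prepareToPlay. Hosts sometimes send 0 to signal silence: return early.
    void processBlock(float* samples, int numSamples)
    {
        if (numSamples == 0)
            return;

        for (int i = 0; i < numSamples; ++i)
        {
            float delayed = ring[writePos];
            ring[writePos] = samples[i];
            samples[i] = 0.5f * (samples[i] + delayed); // mix dry + delayed
            if (++writePos == ring.size())
                writePos = 0;                            // wrap the ring
        }
    }

private:
    std::vector<float> ring;
    std::size_t writePos = 0;
};
```

Because the per-sample loop only depends on numSamples, it behaves identically whether the host sends 1, 37, or 2048 samples per call.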
