List of known Host curiosities

Hi everybody,

After finding out that Logic Pro X freely jumps between different block sizes for its render callback, I would like to know if there are any resources documenting the funky behaviour of different hosts (like a list of all the non-standard things that could introduce bugs).

I know everybody says that the samplesPerBlock parameter in prepareToPlay is only an estimate, but every other host I've tested so far (Cubase, Reaper, GarageBand, Ableton, AULab, JUCE Plugin Host) managed to keep it correct (or at least called prepareToPlay with the new block size).

BTW, how do you guys manage different block sizes? Do you a) simply create bigger temp buffers up front, or b) resize them and take the allocation/dropout hit in the very rare case that the block size differs from the previous value? And if a), how much bigger is a safe value (2x or 4x)?


From what I've read about VST, the value in prepareToPlay is an estimate, but it's a maximum: blocks should be smaller than or equal to it, never greater.

Yeah, there's no such thing as a "safe" size, you really do just have to deal with whatever's thrown at you.

My suggestion for the best plan is: allocate whatever size prepareToPlay asks for (if you're really paranoid you could allocate 2x this), and then take a hit by resizing your buffer only if the required size increases.

It's unlikely this would ever happen more than a couple of times, and if it does you'll get away without any glitches in 99.99% of cases. And even if you're super-unlucky and it glitches, this will probably only happen when play is starting, or when the user begins doing something like scrubbing, so probably they'll blame the host and nobody will care ;)

FL Studio and Logic are the best examples of hosts with nonsense buffer sizes thrown into the processBlock function.

Indeed, the value you get in prepareToPlay is supposed to be the highest that can be asked for during rendering. I remember I had a hard time handling that for a few convolution algorithms I developed in the past, because being asked for a very low buffer size could cause a CPU consumption overhead in the best case, or a crash in the worst case. One of the things I did to solve my issues was to prepare ready-to-use objects in prepareToPlay, initialized not only for the maximum buffer size but also for that size / 2 and that size / 4, so I could switch to the most optimal one depending on what happens. In Logic this was critical, because there is an option somewhere for reducing/increasing the latency in real time, made to be easily available and changed while the user is recording or mixing. But one of the things that happens thanks to that option is a buffer size in processBlock that is ALWAYS equal to the one provided in prepareToPlay divided by two...
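The idea of preparing one state per anticipated block size and picking the best match at render time might look like this. `Engine` here is a hypothetical stand-in for whatever per-block-size state a convolution algorithm needs; none of this is Logic-specific:

```cpp
#include <array>

// Hypothetical per-block-size DSP state; a real one would hold FFT plans,
// partition buffers, etc. sized for its block size.
struct Engine
{
    int blockSize = 0;
    void prepare (int size) { blockSize = size; /* allocate state here */ }
};

struct MultiSizeProcessor
{
    std::array<Engine, 3> engines; // prepared for max, max/2, max/4

    void prepareToPlay (int maxBlockSize)
    {
        engines[0].prepare (maxBlockSize);
        engines[1].prepare (maxBlockSize / 2);
        engines[2].prepare (maxBlockSize / 4);
    }

    // Pick the largest prepared engine that still fits the incoming block,
    // so no allocation is ever needed on the audio thread.
    Engine& engineFor (int numSamples)
    {
        for (auto& e : engines)
            if (e.blockSize <= numSamples)
                return e;
        return engines.back(); // block smaller than max/4: use the smallest
    }
};
```

So when Logic halves the block size, `engineFor()` simply returns the max/2 engine that was already built in prepareToPlay.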

Otherwise, I know a few other weird things happening in various hosts. One that comes to my mind right now is that Ableton Live doesn't care about the variable "isParameterAutomatable" of VST plug-ins...

Yes, unfortunately for some features (such as rapidly changing automation or the new LFO feature we added in T7) you need to process plugins in small blocks, changing plugin parameters in between.

Calling prepareToPlay with the new block size before each call would wreak havoc (lots of plugins use this to allocate buffers etc.) and wouldn't always be possible. For example, take the common 441-sample buffer size on Windows. How does that divide into a set of equal, small blocks?

The long and short of it is the buffer size can and will change between processBlock calls.


If you need a certain number of samples to process an effect one option is to push samples into a FIFO and then read them out when you have enough. E.g. you have an 'unprocessed' buffer and a 'processed' one. You push into the unprocessed, process when you can, then pull out the required number from the processed buffer.

Of course in order for this to work you'll need to report some latency and then balance the latency inside your plugin with some delay buffer.
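The FIFO scheme described above can be sketched as follows. The class name and the fixed internal block size are assumptions for illustration, and `processChunk` is just a placeholder for the real fixed-size DSP; note the pre-filled output queue, which is exactly the latency the plugin would report:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: accept blocks of any size, process in fixed-size chunks,
// and emit blocks of the same size the host gave us.
struct FixedBlockFifo
{
    int internalBlock;
    std::vector<float> input, output;

    explicit FixedBlockFifo (int internalBlockSize)
        : internalBlock (internalBlockSize),
          output ((size_t) internalBlockSize, 0.0f) // pre-fill = latency
    {}

    // Placeholder for the effect's fixed-size processing (here a copy).
    void processChunk (const float* in, float* out)
    {
        for (int i = 0; i < internalBlock; ++i)
            out[i] = in[i];
    }

    // Host-facing call: numSamples can be anything.
    void process (float* samples, int numSamples)
    {
        input.insert (input.end(), samples, samples + numSamples);

        while ((int) input.size() >= internalBlock)
        {
            std::vector<float> chunk ((size_t) internalBlock);
            processChunk (input.data(), chunk.data());
            input.erase (input.begin(), input.begin() + internalBlock);
            output.insert (output.end(), chunk.begin(), chunk.end());
        }

        // The pre-fill guarantees at least numSamples are queued here.
        std::copy (output.begin(), output.begin() + numSamples, samples);
        output.erase (output.begin(), output.begin() + numSamples);
    }

    int latencySamples() const { return internalBlock; }
};
```

A real implementation would use lock-free ring buffers rather than `std::vector` erases on the audio thread, and would report `latencySamples()` to the host (e.g. via `setLatencySamples()` in JUCE); this sketch just shows the bookkeeping.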

Alright, I'll go for the edge-case reallocation method. When it comes to automation, dividing the buffer makes sense. MIDI messages divide the buffer internally anyway, so this should be no problem in my case.