How does sample accurate host->plugin automation work?

I was just wondering how sample-accurate automation works in VST or even AU plugins.

AudioFilterBase::processBlock(…) only gets sample-accurately timestamped MIDI messages, but no sample-accurately timestamped parameter change events at all.
So how can parameters be automated sample-accurately, when they can only be modified via AudioFilterBase::setParameter(…)?

Let's say, for example, parameter #66 is modified at sample positions 0, 7 and 8. Will this lead to a call to setParameter() at sample position 0, then a processBlock(…) call processing 7 samples, then another setParameter() call, then a processBlock() of only 1 sample, and then again a setParameter()?

Is it done like that in hosts like Cubase?
If so, isn't this switching a waste of CPU time when many parameters come into play with many, many updates per quarter note?
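The splitting described above could be sketched like this (a minimal host-side sketch; the `Plugin` interface and names here are hypothetical stand-ins, not the real JUCE or VST API):

```cpp
#include <vector>

// Hypothetical stand-in for a plugin's interface.
struct Plugin
{
    virtual void setParameter (int index, float value) = 0;
    virtual void processBlock (float* samples, int numSamples) = 0;
    virtual ~Plugin() = default;
};

struct ParamChange { int sampleOffset; int index; float value; };

// Host-side sub-block splitting: process only up to the next timestamped
// change, apply the change, then continue. 'changes' must be sorted by
// sampleOffset. With changes at 0, 7 and 8 in a 16-sample block this yields:
// setParameter, processBlock(7), setParameter, processBlock(1),
// setParameter, processBlock(8).
void processWithSplits (Plugin& p, float* samples, int numSamples,
                        const std::vector<ParamChange>& changes)
{
    int pos = 0;
    for (const auto& c : changes)
    {
        if (c.sampleOffset > pos)
        {
            p.processBlock (samples + pos, c.sampleOffset - pos);
            pos = c.sampleOffset;
        }
        p.setParameter (c.index, c.value);
    }
    if (pos < numSamples)
        p.processBlock (samples + pos, numSamples - pos);
}
```

This is exactly where the CPU-cost worry comes from: dense automation degenerates into many tiny processBlock() calls, each paying the full per-block overhead.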

I know that there once was a VstEvent type called kVstParameterType but it is now deprecated.

I would be very thankful if anyone could tell me how the whole thing works with modern sequencers; I guess they must have sample-accurate automation?

It’s not sample-accurate at the moment. I believe Steinberg are going to introduce that in VST3, but that’s not out yet.

I think AUs might have a way of doing it, but I’ve not supported it yet as it wouldn’t work for VSTs. At some point soon it’s one of the things I’ll need to address when I re-vamp the way parameters work.

But doing it the above-mentioned way would of course give sample accuracy - so are they at least processing really small blocks with processBlock(…) then? I mean, there must be some acceptable resolution involved in the automation process, independent of the soundcard's latency setting?
As I said, with the kVstParameterType VstEvent type sample accuracy was no problem, and I implemented it in my plugins.

I’ve no idea if they do it like you suggest - all hosts will be different though. I suspect very few would break down the block size because it’d be so inefficient. Tracktion certainly doesn’t do this, it just sends all the param changes once per block.

So if the soundcard's latency is 512 samples (an acceptable value), Tracktion only sends out parameter changes every 512 samples?

Probably, though it might limit the block size if it's much more than that. Can't remember, TBH. I doubt other hosts will be any different, because of the mess you'd get trying to process a chain of plugins all needing different, overlapping blocks.

That's very interesting. Following the rules of the VST 2.4 spec, one really would be forced to split sample processing into little chunks, interleaved with setParameter() calls, to get sample-accurate parameter automation. It's weird that the pre-VST 2.0 spec was better in that sense, with its kVstParameterType VstEvents.

BTW, I checked just now and read that Logic 7.1 and Cubase SX 1 do have sample-accurate automation, but since I am no AU expert I do not know how it works there. :stuck_out_tongue:

Anyway, an automated volume change at an automation resolution of 512 samples on a pure, low-frequency (e.g. 60Hz) sine wave will introduce very audible zipper noise, depending on the speed of the volume change :?

Sure would. I’m sure anyone doing that kind of thing would filter it in their algorithm though.

Well, the point is that if Cubase SX's or Logic's timing really is sample-accurate, then a high-resolution (let's say 16 samples per parameter change) volume change from minimum to maximum on the same sine wave (playing, for instance, in NI Kontakt, or even just an audio file) will be pure and exact, as it should be.
I think I will really test how exact the automation is in Cubase SX, or what the best possible resolution is.
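One way such a test could be done offline (a sketch, not a description of how any actual test was run): render a constant-level signal through a ramped gain automation and measure how long the output sits on each gain plateau - the plateau length is the host's automation step size in samples.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Returns the run lengths of constant-valued plateaus in a rendered buffer.
// If the input was a constant 1.0 signal with ramped gain automation applied,
// each run length equals the host's automation update interval in samples.
std::vector<int> measureStepSizes (const std::vector<float>& rendered)
{
    std::vector<int> steps;
    int run = 1;

    for (std::size_t i = 1; i < rendered.size(); ++i)
    {
        if (std::fabs (rendered[i] - rendered[i - 1]) < 1.0e-9f)
            ++run;                       // still on the same gain plateau
        else
        {
            steps.push_back (run);       // gain stepped: record plateau length
            run = 1;
        }
    }

    steps.push_back (run);
    return steps;
}
```

Exporting the automated track to a file and running something like this over it would give the kind of per-host resolution numbers quoted later in the thread.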

Coding my synth, I noticed that an unsmoothed VCA envelope resolution coarser than 4 samples on a 60Hz sine wave becomes noticeable by ear.

Parameter smoothing in the plugin itself can help, though most plugins sadly don't do it.

Are you certain about the sample accuracy of Cubase? Early versions of my plugins would crash if the buffer size wasn't consistent, yet they ran fine in Cubase SX1 with automation.

In general though, I think most hosts just use the ASIO buffer size.

FL Studio has taken a lot of flak over the years because they do use sub-buffering to get sample-accurate automation, and some plugins balk at getting a buffer of size 5, etc. NI Kontakt among them, if I recall.

Either way, the VST SDK is ambiguous on this issue. Parameter changes could be passed to the process method in a list with delta times, but they aren’t.

Also, a good plug will expect the bunched up parameter changes and interpolate. This essentially puts you one buffer behind realtime, but as long as the operation is consistent, the user can compensate if it’s even noticeable.

I am certain that I read that Cubase SX has automation sample-accuracy. See .

My test consisted of drawing an automation line from 0 to max, changing the volume via a plugin and also via the audio track's volume fader.

The results:

Cubase SX3 -> +/- 700 to 5500 samples resolution, depending on the ASIO latency (even when exporting audio; somebody please explain to me why)
FLStudio7 -> FL Plugins -> 28 samples resolution
FLStudio7 -> VST plugins -> 200 samples resolution

I must say this really, really annoys me, because coding all my plugins (the best known is Reflex from Basement Arts) I suffered a lot to implement sample-accurate automation response. What for? Nothing.

I do not understand how Steinberg can have the guts to say their Cubase SX has sample-accurate timing as this is just a lie.

Old thread, I know. Just wondering what the state of sample-accurate automation is today. Possible with any of the plugin formats JUCE supports?