Can render realtime be predicted?

I’d like the exact time a sound is scheduled to be heard reported back to the GUI; the sound is triggered by a button on my VSTi. Surely processBlock() in the processor will be called slightly ahead of when its output is actually heard. Is it always called a fixed amount of time ahead? Is it measurable? Or is this something I’m never going to know to any meaningful sample accuracy, so I should just make do with a small amount of error? It’s not too important, but if it’s an option I would like to know.

If the host tries to correct for latency, it will use whatever you set using AudioProcessor::setLatencySamples().
This is the number of samples by which your signal leaves the chain delayed.

Is this what you’re looking for?

It could be. Do you mean to say that processBlock() will be invoked getLatencySamples() ahead of when its output is required to be played? So in theory, if the whole function took longer than getLatencySamples() worth of time, the audio would stutter?

The latency doesn’t have anything to do with what you are looking for. A plugin can report a latency amount to the host so that the host can do latency correction between its tracks.

Basically there is no way to sync the audio processing and the GUI completely accurately. (But you may get close, at least if the host isn’t doing some long prerendering of the audio, which happens by default in Reaper, for example.)


If your processor reports a latency of, say, 512 samples to the host, it means the filtered sample x leaves your processor 512 samples later.

If your host can do latency correction and the audio comes from disk rather than being recorded live, the host will send the samples 512 samples early, so that the result arrives with the right timestamp to be mixed into the processing chain.
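The arithmetic can be sketched as follows. This is a toy model of host-side compensation, not real host code, and the function name is made up:

```cpp
#include <cstdint>

// Toy model of host-side latency compensation (names invented).
// If a plugin reports `latencySamples`, the host feeds it material
// from that many samples ahead on the timeline. After the plugin's
// internal delay, each sample re-emerges exactly at its original
// timeline position.
std::int64_t timelinePositionToRead(std::int64_t streamPos,
                                    std::int64_t latencySamples)
{
    return streamPos + latencySamples;
}
```

So with a reported latency of 512, the sample the host reads at stream position 0 is the one with timeline position 512; it comes out of the plugin 512 samples later, i.e. back at position 512 where it belongs.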

No, the audio still needs to be processed as fast as possible. The latency is normally a result of the fact that an algorithm needs more than one sample to compute its result, such as a filter that depends on its internal state.
There is no waiting involved anywhere…
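A minimal illustration of that point (plain C++, not JUCE code): a 512-sample delay line processes every block as fast as it can; the "latency" is simply where the signal re-emerges, not any waiting inside processBlock():

```cpp
#include <cstddef>
#include <vector>

// An algorithm whose output is inherently delayed: a simple
// fixed-length delay line. Each call returns immediately; the
// input just takes `delaySamples` calls to reappear.
class DelayLine
{
public:
    explicit DelayLine(std::size_t delaySamples)
        : buffer(delaySamples, 0.0f) {}

    float process(float input)
    {
        float out = buffer[writeIndex];       // written delaySamples calls ago
        buffer[writeIndex] = input;
        writeIndex = (writeIndex + 1) % buffer.size();
        return out;
    }

private:
    std::vector<float> buffer;
    std::size_t writeIndex = 0;
};
```

Feed an impulse in and it comes out 512 samples later, yet every single process() call returns instantly.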

EDIT: read your post again, I was only talking in audio time. For GUI it’s just what @Xenakios said, it plays no role.
At 44.1 kHz a long block of 512 samples is about 11.6 ms, which means you are one frame late at a 100 Hz painting frequency, which is not recognisable…
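For reference, the block duration is just samples divided by sample rate (function name is made up):

```cpp
// Duration of an audio block in milliseconds.
// 512 samples at 44.1 kHz is about 11.6 ms, roughly one frame
// at a 100 Hz (10 ms) repaint rate.
double blockMilliseconds(int numSamples, double sampleRate)
{
    return 1000.0 * numSamples / sampleRate;
}
```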

That puts it into perspective; I hadn’t thought about the paint frame rate. I suppose if I were really interested I could always record the time of each render call and estimate whereabouts in between the GUI button press occurred.
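That idea could be sketched roughly like this. All the names here are invented for illustration (this is not a JUCE API): the audio thread stamps each block with a wall-clock time and a running sample position, and the GUI converts a later wall-clock moment into an estimated sample position. A real implementation would need to guard against reading a torn pair of values:

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical sketch: correlate wall-clock time with the audio
// stream's sample position. stamp() would be called at the top of
// each processBlock(); estimateSampleAt() from the GUI thread.
struct BlockStamp
{
    std::atomic<std::int64_t> samplePosition { 0 };
    std::atomic<std::int64_t> wallTimeMicros { 0 };
    double sampleRate = 44100.0;

    void stamp(std::int64_t samplePos, std::int64_t nowMicros)
    {
        samplePosition.store(samplePos, std::memory_order_relaxed);
        wallTimeMicros.store(nowMicros, std::memory_order_relaxed);
    }

    // Estimate which sample position corresponds to `nowMicros`,
    // assuming the stream has kept running at sampleRate since the
    // last stamp.
    std::int64_t estimateSampleAt(std::int64_t nowMicros) const
    {
        const auto elapsedMicros =
            nowMicros - wallTimeMicros.load(std::memory_order_relaxed);
        const auto offset = static_cast<std::int64_t>(
            elapsedMicros * sampleRate / 1'000'000.0);
        return samplePosition.load(std::memory_order_relaxed) + offset;
    }
};
```

With this, a button press arriving 10 ms after the last block stamp would map to a position 441 samples past the stamped sample position, which is exactly the "whereabouts in between" estimate described above.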