I don't suppose anyone has a comprehensive summary of how this is handled by the various hosts?

The JUCE documentation rather enthusiastically encourages it to be called whenever latency changes.  But there are discussions online about how infrequently hosts actually respond to any changes.  And my brief experimentation suggests there are issues. 

I've written a plugin which changes from 0 latency to 20ms when an option is selected. However, instantiating two of the plugins in Logic by inserting one, changing the setting and then duplicating the track results in the two tracks playing out of time. And as Jules notes, it's pretty hard for a host to adjust the latency in realtime.

However, shifting track delays are the cause of studio nightmares! So it'd be good to have an approach that doesn't result in any uncertain behaviour.

I do need to set the latency based on the sample rate, otherwise users of 44.1kHz audio are going to be living with a long delay designed for 192kHz users (probably a small group admittedly - but undoubtedly fussy customers). So I'm going to try popping a call into prepareToPlay as well...

Any clues as to what definitely works would be greatly appreciated!


What setLatencySamples does

Calls to setLatencySamples perform two actions: 

(1) store the latency in the AudioProcessor object. The host can call getLatency at any time to find out what this stored value is.

(2) call or set API specifics.  These are: 

AAX: SetSignalLatency

VST(2): setInitialDelay sets a value in the cEffect structure. JUCE then calls a couple of change notification routines...

AudioUnit: calls PropertyChanged (kAudioUnitProperty_Latency, kAudioUnitScope_Global, 0);



> However instantiating two of the plugins in Logic by inserting one, changing the setting and then duplicating the track results in the two tracks playing out of time

I recently realized, with a similar setup, that after stopping and starting again, Logic was back in sync. If it's not, it's clearly a bug in the host software.

I personally assume that the host only respects setLatencySamples after a sample rate/buffer size change, or after the audio engine has been restarted. This is probably too pessimistic. Any other suggestions?


FYI - I've started cataloguing the behaviour of hosts in response to setLatencySamples:

I've done all the sequencers I have on the laptop so far, and I'm going to experiment with more shortly...

PS. I can't replicate the Logic bug I was whinging about in my original post now. So that might have been my error.

Hello jimc, do you happen to have this list still available somewhere online?

I don’t! But I’m prepared to collect the info again if someone sets up a reliable wiki :)


No need to go that far if you don’t have the info at hand anymore!
I mean, that would be super useful, but I imagine this is a time-consuming task and I don’t know if others are interested as well.

Maybe you could share the general trend: what proportion of DAWs call getLatency, and at what specific times?

My latency is a fixed time delay, so the number of samples depends on the sample rate, and I set it in prepareToPlay.
I am facing a problem with Sound Forge 12, as it seems to adjust its latency based on an initial call at 44.1kHz and does not adjust during the real prepareToPlay call…
I would be curious to know if this behavior is a common one…

I think you’re doing it correctly and the host should pick it up, but as mentioned in this thread, latency reporting is an area where hosts show a lot of variation.

If you want a fixed number of samples of latency, you could calculate the highest number of samples needed (probably for 192kHz), report that, and add extra delay for lower sample rates yourself during processing.

I am already doing something like this for a component of that delay that can continuously vary with a user parameter during processing. The delay is already 3x what it could be for most common cases, and I am worried that expanding it too much would make it less practical in some scenarios (e.g. real-time use).
If I were to account for a “worst case” scenario of a 768kHz rate, then the delay would be 16x what it should be at the much more common 48kHz rate.
I could probably ignore 768kHz considering the small percentage of users that would run a DAW at that rate (at least nowadays, who knows…), but then I would also need to compare it to the percentage of DAWs that I am trying to fix with this strategy.

In the end I might add a “real-time” button that would always try to use the lowest possible delay (including the part that can vary during processing), and another mode that would be conservative and add a large fixed delay…

People using more than 192 kHz certainly use expensive DAWs that can properly query latency ;).

Anyway - a better solution would be to get in touch with the Sound Forge 12 devs and ask them why it’s not working - assuming the code is fine on your end and does work in other host applications.