Output latency for different outputs

Hi, I’m building a plugin player with tracktion_engine, and for this purpose I was doing some tests comparing Nuendo and Waveform. I noticed an interesting behaviour that Nuendo (or Cubase) has, that I’d like to have in my software, and that seems to work differently in Waveform:

In both programs I set the buffer size to 64 samples, and in both I created 2 tracks:

The first one goes to out 1/2 of my RME Fireface UFX+ and has Kontakt loaded (with Alicia’s Keys) plus Ozone 9 as an effect (with all sub-modules loaded to increase the latency, which is about 460 ms).

The second one goes to out 11/12 of the RME Fireface UFX+ and has only Kontakt loaded (with Alicia’s Keys).

In both programs, playing the first track from a MIDI keyboard generates glitches and the latency is about 460 ms.
But playing the second track (with the first one still loaded in the project): in the Steinberg software the output has only 2.5 ms of latency and produces no glitches, while in Waveform it has the same latency as the first track and does glitch. Is this because of latency compensation? I think Nuendo applies latency compensation only to signals that go to the same output, and I’d like to implement the same behaviour in my player. For example: say I’m playing a track routed to the main track, and the main track has a plugin that introduces latency; by sending the signal (maybe through a send plugin) to another track with a different output that feeds my in-ears, I could monitor it without latency and play in real time. Is it possible to program the same behaviour with the Tracktion framework?
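To make the difference concrete, here is a minimal sketch of the two behaviours, assuming the numbers from the test above. It is plain C++, not tracktion_engine API; the struct and values are only illustrative, and buffer size and device latency are left out.

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical description of the two test tracks above.
struct TrackPath
{
    std::string output;      // physical output pair the track feeds
    double pluginLatencyMs;  // latency added by the track's plugins
};

int main()
{
    std::vector<TrackPath> tracks { { "out 1/2",   460.0 },   // Kontakt + Ozone 9
                                    { "out 11/12",   0.0 } }; // Kontakt only

    // Global compensation (what Waveform appears to do here): every output
    // is delayed by the longest plugin path anywhere in the graph.
    double globalLatency = 0.0;
    for (auto& t : tracks)
        globalLatency = std::max (globalLatency, t.pluginLatencyMs);

    // Per-output compensation (what Nuendo appears to do): each output is
    // only delayed by the longest path that actually feeds it.
    for (auto& t : tracks)
        std::printf ("%s: global = %.1f ms, per-output = %.1f ms\n",
                     t.output.c_str(), globalLatency, t.pluginLatencyMs);
}
```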

Another thing:

Setting the buffer size to 2048 in Nuendo produces 45 ms of latency on all tracks: for example, if the project contains only one track with a zero-latency plugin, the signal still goes out with 45 ms of latency. Setting it to 2048 in Waveform, it tells me there are 42.7 ms of latency, but if I have only one track with zero latency it seems to be played in real time anyway… Why is that?

I’m not sure it makes much sense to output different latencies to different devices. That would mean if you had something like a surround setup and plugins with latencies on only certain tracks, they would have a completely different timing to the other channels.

It’s also more complicated than your scenario, as you might have a graph that crosses the two outputs, e.g. with an aux send/return or Rack. It’s precisely these kinds of edge cases I spent a year trying to fix with the audio engine re-write.

Sorry, I don’t quite understand the last bit. In general total latency is buffer size + longest path latency of plugins + audio device output latency.
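As a rough worked example of that formula (the 48 kHz sample rate and the couple of milliseconds of device output latency are assumptions here, not measured values):

```cpp
#include <cstdio>

// total latency = buffer size + longest plugin path latency + device output latency
double totalLatencyMs (int bufferSizeSamples, double sampleRate,
                       double pluginPathMs, double deviceOutputMs)
{
    return (bufferSizeSamples / sampleRate) * 1000.0 + pluginPathMs + deviceOutputMs;
}

int main()
{
    // 2048 samples at an assumed 48 kHz is ~42.7 ms on its own, which matches the
    // figure Waveform reports above; adding a couple of milliseconds of device
    // output latency would land close to the 45 ms reported by Nuendo.
    std::printf ("%.1f ms\n", totalLatencyMs (2048, 48000.0, 0.0, 0.0)); // ~42.7
    std::printf ("%.1f ms\n", totalLatencyMs (2048, 48000.0, 0.0, 2.5)); // ~45.2
}
```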


About the last thing, I know, and that’s what I expected to happen… Maybe I’ll do more tests to document my results.

About the previous point… What you’re saying makes sense in the scenario of a main surround listening setup, but take this scenario from a live performance:

The musicians are playing live; they all go into the same audio interface and all signals are processed on the same PC in the same program.
On the main out, which goes to the audience, there’s a plugin that has 50 ms of latency.
Each musician has as a reference a click generated by an audio track (so with 0 latency) that does not go to the main output but only to tracks assigned to different outputs, routed to the musicians’ in-ears.
The tracks played live by the musicians don’t have latency and are routed to the main out and to the outputs (with different volumes etc.) for the in-ears.
Let’s also suppose there’s a track that plays a sequence while they are performing, with a plugin that has 30 ms of latency.

As it works now, all outputs have (to keep things simple) 80 ms of latency (30 from the sequence track + 50 from the main output track). But:
If I’m playing an instrument track I don’t want the main out signal coming back into my monitors, only the signal from the track I’m playing, the click, and the other tracks played by the other musicians and/or the sequence track, so:
Who cares about the main track’s 50 ms in in-ears that are only being used as monitors? And if a musician gets a signal back up to 30 ms late from a track he isn’t playing, that latency isn’t perceived. So if I have my 0-latency track in my monitors and another signal arrives 30 ms late, I don’t care about latency compensation in my monitors; compensation only matters for a good result for the audience on the main out. The same goes between the musicians: they all have the same click as a constant reference, and even without it, hearing the guitarist (who at that moment is playing through a plugin with 3 ms of latency) 2–3 ms late while I play my 0-latency track should be an acceptable scenario.

Sorry for the long message :pray:

Isn’t this all what direct monitoring from the desk is for?

Do you mean from an external mixer?
In all my real professional live situations, where everything is set up for high quality, I don’t have an external mixer. But even where routing to the monitors could be done from an external mixer, things don’t change: from the same PC and project, the musicians’ signals leave the audio interface to the mixer with different latencies, and the main mixing is done on the mixer, which sends different aux mixes back to the monitors.

Example:
The vocalist has a track with 3 ms of latency that goes to out 1/2.
The keyboard player has another with 0 latency that goes to out 3/4.
A sequence track with 30 ms of latency goes to out 5/6.
A click track with 0 latency goes to out 7.
And maybe others.
These are connected to the mixer, which sends signals back to the musicians’ monitors.

If that’s not what you meant, please explain :pray:

Thanks again!

Honestly, this isn’t something we’re looking to add any time soon. It would be a major rewrite and the only benefit seems to be a few ms reduced latency for some performers. Surely it’s easier to just use plugins with less latency? Or a smaller buffer size?

I’m hesitant to add an option to send differently timed audio out of different outputs because I can just see a lot of people misusing that. If you ever play those channels out loud together they’ll be a mess.


OK, that’s understandable, and I realise the amount of work that adding this feature would involve. But as for how users respond to it… I have just tried this in Logic and Studio One as well, which, like Nuendo and Cubase, are among the current standards on the market, and they all work this way by default (in fact, this is what I expected from Waveform as a user before running the test). The problem of non-synchronised outputs does not arise, because the logic of these programs is to divide the outputs into groups (mono / stereo / various surround formats): once a surround group has been set up (for example 5.1), all the signals that go to that output group are compensated together, separately from signals that go via auxiliary sends to another output group, for example a stereo one.
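A minimal sketch of that per-output-group idea, assuming a hypothetical track description (plain C++ for illustration, not how any of those hosts actually implement it): the compensation delay for each track is computed within its own output group rather than across the whole project.

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <vector>

// Hypothetical track description: which output group (bus) it feeds and the
// plugin latency of its path, in milliseconds.
struct TrackInfo
{
    std::string outputGroup;   // e.g. "main 5.1", "in-ear stereo 3/4"
    double pathLatencyMs;
};

// For each track, the extra delay needed so that all tracks feeding the SAME
// output group line up, while other groups stay unaffected.
std::vector<double> perGroupCompensation (const std::vector<TrackInfo>& tracks)
{
    std::map<std::string, double> groupMax;

    for (auto& t : tracks)
        groupMax[t.outputGroup] = std::max (groupMax[t.outputGroup], t.pathLatencyMs);

    std::vector<double> delays;
    for (auto& t : tracks)
        delays.push_back (groupMax[t.outputGroup] - t.pathLatencyMs);

    return delays;
}
```

With that grouping, a latency-heavy plugin feeding the main group only delays the channels of that group, while tracks routed to a separate in-ear group keep their own, shorter compensation.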

Right, that’s as I thought: it might be better to do this when we have proper “bus” support.
There’s still the issue of cross-bus signals via send/return but I think it’s possible to do this via the new audio engine I’ve done.
