Wrong latency compensation in Logic

Hi!

I just found a weird issue with latency compensation in Logic Pro X (at least in 10.6.3 and newer versions).
When my plugin reports a latency of 3916 samples at 44.1kHz, Logic compensates only 3915 samples. This value of 3915 is also shown in the tooltip Logic displays when you hover over the plugin insert selector.

I tried some values around this one and found the following behaviour:
Value reported by plugin → Value compensated and shown by Logic
3915 → 3915
3916 → 3915
3917 → 3916
3918 → 3918

The GetLatency() function in the AU wrapper does not return a sample count but the latency as a time in seconds, i.e. the number of samples divided by the sample rate.
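For reference, the conversion looks roughly like this (a paraphrased sketch, not the verbatim wrapper code):

```cpp
#include <cmath>
#include <cstdio>

// Paraphrased sketch: AUs report latency as a time in seconds,
// so the wrapper divides the sample count by the sample rate.
double getLatencySeconds (int latencySamples, double sampleRate)
{
    return latencySamples / sampleRate;
}

int main()
{
    const double seconds = getLatencySeconds (3916, 44100.0);

    // In double precision the round-trip error is far below one sample,
    // so rounding to the nearest integer recovers 3916.
    printf ("%ld samples\n", std::lround (seconds * 44100.0));
    return 0;
}
```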

I first thought this could be a rounding issue inside the AU wrapper, but when using the same plugin in other DAWs (e.g. Reaper), the reported latency of 3916 samples is correctly displayed and compensated.

Has anybody here experienced an issue like that? Is this a known bug in Logic? Is there anything I could do to fix this inside my plugin?

Best Regards,
Gregor

Interesting. The values hint at issues related to numeric precision, or the discrepancy between accurately rounding vs. flooring or truncating a value. Results might also vary depending on additional calculations and the exact evaluation of the expression; it can also make a difference whether it's calculated at 32-bit or 64-bit precision, and whether something like "relax IEEE compliance" is enabled in the compiler. Reaper tends to calculate everything in double-precision seconds (which does come with precision issues in real life as well). What sample rate did you use?

here is an example of how quickly one can end up with "off by one" errors doing these kinds of calculations. quickly typed this in godbolt and abused the fact that optimizing compilers will fold the result right into the generated code on the right side:
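A minimal sketch of the idea (hypothetical code, with the truncating samples → seconds → samples round trip written out; at -O2 the compiler folds the result into a constant):

```cpp
#include <cstdio>

// hypothetical sketch: latency round trip via a precalculated
// 32-bit float reciprocal, truncating on the way back
int compensatedLatency()
{
    const float sampleRate = 41000.0f;
    const float recip      = 1.0f / sampleRate; // precalculated reciprocal
    const float seconds    = 3916 * recip;      // plugin reports seconds

    return (int) (seconds * sampleRate);        // host converts back, truncating
}

int main()
{
    printf ("%d\n", compensatedLatency());      // prints 3915
    return 0;
}
```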

3916 samples turn into 3915… what kills it here is the use of the sample rate reciprocal, which increases the imprecision when calculating the term 3916 * (1 / 41000) in 32-bit float.

one might write code like this because division is expensive, and the reciprocal can be precalculated to gain some performance. it feels like something like this could be going on under the hood in logic, and i wouldn't even consider it a bug, but a precision issue… I don't think you can fix this in a non-hacky-workaround way.

This example gives 3915 because you used 41000 as the sample rate instead of 44100. Floating point numbers can be imprecise, but not that imprecise.


Yes, obviously, please don't write code like this :smiley: I simply wanted to show a hypothetical example of how ignoring the topic can lead to these kinds of issues, and what's going on with Logic feels related to this.

Also, always take compiler warnings of the sort “loss of precision” seriously, don’t use implicit casts, etc :wink:

Anyways, the actual number 41000 is not the problem; even though I made a typo, code handling this should work at any sample rate.

Here's another example, with 48000 as the sample rate, where 3915 turns into 3914 (see the sketch below). The problem is still the reciprocal and the mixture of orders of magnitude that 32-bit float cannot cover with sufficient precision.
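The same sketch as before, only with the sample rate changed (hypothetical code again):

```cpp
#include <cstdio>

// same round trip as above, at a 48 kHz sample rate
int compensatedLatency()
{
    const float sampleRate = 48000.0f;
    const float recip      = 1.0f / sampleRate;
    const float seconds    = 3915 * recip;

    return (int) (seconds * sampleRate);
}

int main()
{
    printf ("%d\n", compensatedLatency());      // prints 3914
    return 0;
}
```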

Sidenote: the problem escalates audibly if you do "synth oscillator phase increment per sample" in 32-bit float. For that, 32-bit float is indeed not good enough :wink:

What do you mean? 32 bit floats are used for phasor and phasor increments all the time. Are you talking about using a reciprocal in the calculation?

Yup, errors that result from precision loss can be a problem if you're not aware of where they come from, how large they are, and how to avoid them. They may be perfectly fine for updating phasors, or they may not be. It depends on context, the actual math ops you're combining, in what order, and on what sort of values.
Let's say you have an f32 "increment per sample" that you add to another f32 value each sample. Whatever precision loss error there is, it could add up to the point where it becomes a noticeable problem. Adding float a to float b 48000 times is not the same as multiplying a by 48000 in one operation.
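A quick illustration of that last sentence (made-up numbers, single precision throughout):

```cpp
#include <cstdio>

int main()
{
    // hypothetical phase increment for a 440 Hz oscillator at 48 kHz
    const float inc = 440.0f / 48000.0f;

    // add the increment once per sample, like a phasor would
    float accumulated = 0.0f;
    for (int i = 0; i < 48000; ++i)
        accumulated += inc;

    // a single multiplication in double precision as the reference
    const double reference = (double) inc * 48000.0;

    // the repeated addition drifts away from the single multiplication,
    // because every += rounds, and the rounding error grows as the
    // running sum gets large compared to the increment
    printf ("accumulated: %.9f\nreference:   %.9f\n", accumulated, reference);
    return 0;
}
```

(A real phasor wraps back into [0, 1), which keeps the running sum small, but the rounded increment still means the frequency is slightly off, and per-sample rounding adds up over long periods.)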
I've observed and measured errors in Reaper projects (in very long sessions) that actually come from f64 double precision not being enough to be perfectly sample accurate. Those were related to snapping media items to exact samples in the timeline. Imagine a media item supposed to snap to an exact 96 kHz sample transition at approx. time index 1h40min3.5231s. If you look into such Reaper sessions, you'll see these double values looking kinda sketchy, and converting them back to a sample index does not yield an integer.
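You can see the effect in isolation by scanning sample indices around that time region and checking whether the samples → seconds → samples round trip is exact in double precision (a made-up check, not Reaper's actual code):

```cpp
#include <cstdio>

int main()
{
    const double sampleRate = 96000.0;

    // sample indices around 1h40min at 96 kHz (~576 million samples)
    for (long long n = 576000000; n < 576001000; ++n)
    {
        const double seconds = (double) n / sampleRate; // position stored in seconds
        const double back    = seconds * sampleRate;    // converted back to samples

        if (back != (double) n)                         // round trip not exact
            printf ("%lld -> %.17g\n", n, back);
    }
    return 0;
}
```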

Using reciprocals to trade divisions for multiplications is a good recipe for increasing precision issues, since you usually can't have the same precision in the large and the small portion of the number at the same time. Add 0.000001f to 10000000.f and it doesn't work very well (numbers made up, not sure whether they can even be represented perfectly in f32, but you get the idea).
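As it happens, those made-up numbers do illustrate the point as-is: at a magnitude of 10000000 the spacing between adjacent floats is 1.0, so the small addend is absorbed completely:

```cpp
#include <cstdio>

int main()
{
    const float big   = 10000000.0f; // adjacent floats are 1.0 apart here
    const float small = 0.000001f;

    // the nearest representable float to 10000000.000001 is 10000000.0,
    // so the addition changes nothing
    printf ("%.1f\n", big + small);  // prints 10000000.0
    return 0;
}
```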
