Offsetting the compounding inaccuracy of samples per duration of time

I’m working on a little arpeggiator. As is pretty typical (I think), the duration of each note is represented as some number of samples. For example, if the arpeggiator is playing one note per beat at 130 bpm with a sample rate of 44100 Hz, the number of samples per note works out to 20353.84. Of course fractional samples make no sense, so this has to be rounded to an integer. By rounding, you’re losing a tiny bit of accuracy. Not a big deal in most cases. Still, it bothers me because this minuscule amount of inaccuracy compounds with each note. After some (admittedly crazy large) number of notes, the gradual drift off-tempo will become noticeable.

Is this just the nature of using samples to represent a duration of time? Are there any techniques to offset the compounding inaccuracy?
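To put a rough number on that drift, here’s a quick sketch (C++; `driftAfter` is a hypothetical helper, using the 130 bpm / 44100 Hz figures above):

```cpp
#include <cmath>
#include <cstdint>

// Drift (in samples) after `notes` notes when each note length is
// rounded to a whole number of samples before accumulating.
double driftAfter(int notes, double sampleRate, double bpm) {
    double samplesPerBeat = sampleRate * 60.0 / bpm;          // ~20353.846 at 130 bpm
    int64_t perNote = (int64_t) std::llround(samplesPerBeat); // 20354
    return (double)(perNote * notes) - samplesPerBeat * notes;
}
```

At these settings each rounded note is about 0.154 samples too long, so after 10,000 notes the arpeggiator is roughly 1,538 samples (~35 ms) ahead of where it should be.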

Yes, just keep everything as `double` until the last moment when you actually need an `int`. That way the fractional parts are “kept” as part of the accumulation.

I had this same problem from accumulating `int`s, and it was actually really noticeably out of time by about the 100th bar or so in a song. Switching to `double`s fixed it.
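A minimal sketch of the accumulate-as-`double` idea (C++; `startOfNote` is a hypothetical name, and it recomputes from zero just for illustration — a real arpeggiator would keep `position` as running state):

```cpp
#include <cmath>
#include <cstdint>

// Start sample of note `n` when the position is accumulated as a double
// and only rounded at the moment an integer sample index is needed.
int64_t startOfNote(int n, double samplesPerBeat) {
    double position = 0.0;
    for (int i = 0; i < n; ++i)
        position += samplesPerBeat;          // fractional part is kept
    return (int64_t) std::llround(position); // round only at the edge
}
```

Every note start lands within half a sample of the ideal time; the rounding error never carries over into the next note.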


AH, that makes so much sense. Thank you!

Beware that you can get drift with `double` as well, if you accumulate the double-precision rounding error.

You don’t really have to accumulate anything, though, do you? How about calculating the nearest sample for the nth beat instead? The calculation time is negligible. You’re just doing a multiply-and-divide to compute the next sample from the next beat number, rather than incrementing a stored value by a pre-computed number of samples per beat. No accumulated errors, then.
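That multiply-and-divide could look something like this (C++; `sampleForBeat` is a hypothetical name):

```cpp
#include <cmath>
#include <cstdint>

// Nearest sample index for beat `n` at a fixed tempo. Computed directly
// from n each time, so rounding error never accumulates.
int64_t sampleForBeat(int64_t n, double sampleRate, double bpm) {
    return (int64_t) std::llround((double) n * sampleRate * 60.0 / bpm);
}
```

Each result is at most half a sample away from the exact beat time, no matter how large `n` gets.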

(If there are tempo changes, I guess you’d have to compute the sample starting from the sample where the last tempo change happened, but still not a difficult task.)
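For the tempo-change case, a sketch of the same idea anchored at the last change (hypothetical names again):

```cpp
#include <cmath>
#include <cstdint>

// Sample index of beat `n`, where the current tempo `bpm` took effect
// at beat `anchorBeat`, which started at sample `anchorSample`.
int64_t sampleForBeat(int64_t n, int64_t anchorBeat, int64_t anchorSample,
                      double sampleRate, double bpm) {
    double beatsSinceChange = (double)(n - anchorBeat);
    return anchorSample +
           (int64_t) std::llround(beatsSinceChange * sampleRate * 60.0 / bpm);
}
```

Drift within a tempo section is still bounded by half a sample; each tempo change just resets the anchor.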