I’m building an application that, amongst other things, is able to sync its audio tracks to MIDI clock. One of the hurdles I’ve encountered is achieving accurate sync. When I receive a MIDI Start command, the audio has to start playing. Of course this won’t happen instantaneously, so my method of coping with this is to cut off a number of samples from the start of the audio track, to create an offset.
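To make that concrete, here’s roughly the calculation I mean (just a sketch with made-up names, not my actual code): if I know when the next block will actually reach the speakers, the gap between the MIDI Start message and that moment converts directly into a number of samples to drop.

#include <cstdint>

// Hypothetical helper: both times are in seconds on the same clock.
int64_t samplesToSkip (double midiStartTimeSecs,
                       double blockOutputTimeSecs,
                       double sampleRate)
{
    // The audio can only start once this block reaches the speakers, so the
    // track has to be advanced by however many samples fit into that delay.
    const double delaySecs = blockOutputTimeSecs - midiStartTimeSecs;
    return delaySecs > 0.0 ? (int64_t) (delaySecs * sampleRate + 0.5) : 0;
}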
In order to calculate this offset I need to know when a particular block of audio is going to hit the speakers. I’ve modified the JUCE CoreAudio implementation so that it uses the CoreAudio timestamps to calculate this latency (this was actually pretty easy).
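The idea, expressed against the raw CoreAudio callback rather than JUCE’s actual code (so treat this as a sketch): the io-proc is handed an AudioTimeStamp saying when the output buffer will reach the hardware, and subtracting “now” from that host time gives the latency of the current block.

#include <CoreAudio/CoreAudio.h>
#include <CoreAudio/HostTime.h>

// Estimate how far in the future this output buffer will hit the hardware,
// using the output AudioTimeStamp passed to an AudioDeviceIOProc.
static double estimatedOutputLatencySecs (const AudioTimeStamp* inOutputTime)
{
    if ((inOutputTime->mFlags & kAudioTimeStampHostTimeValid) == 0)
        return 0.0; // no host time available; fall back to the reported latency

    const UInt64 nowNanos = AudioConvertHostTimeToNanos (AudioGetCurrentHostTime());
    const UInt64 outNanos = AudioConvertHostTimeToNanos (inOutputTime->mHostTime);

    return outNanos > nowNanos ? (outNanos - nowNanos) * 1.0e-9 : 0.0;
}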
I would like to do the same with ASIO, and it seems capable of it (something with ASIOTime structs), but it seems that JUCE’s ASIO implementation doesn’t allow this: asioMessagesCallback returns 0 when it’s asked about kAsioSupportsTimeInfo.
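For reference, this is roughly what the mechanism looks like according to the ASIO SDK (a sketch of the idea, not JUCE’s code): the host reports support for kAsioSupportsTimeInfo from its message callback, and the driver then delivers buffers through bufferSwitchTimeInfo() with an ASIOTime carrying a system time and sample position for each switch.

#include "asio.h" // ASIO SDK header

long asioMessages (long selector, long value, void* message, double* opt)
{
    if (selector == kAsioSupportsTimeInfo)
        return 1L; // JUCE currently returns 0 here, which keeps the time-info callback disabled

    // ... handle the other selectors as usual ...
    return 0L;
}

ASIOTime* bufferSwitchTimeInfo (ASIOTime* timeInfo, long doubleBufferIndex, ASIOBool directProcess)
{
    if ((timeInfo->timeInfo.flags & kSystemTimeValid) != 0)
    {
        // timeInfo->timeInfo.systemTime says when this buffer switch happened;
        // combined with the output latency it tells you when the block will
        // actually hit the speakers.
    }

    // ... normal buffer-switch processing goes here ...
    return nullptr;
}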
Is there maybe any other way to achieve some sort of ASIO timestamping, or am I just going to have to hack the JUCE implementation?
The AudioIODevice gives you a latency, though it’s the one reported by the coreaudio/asio device rather than being generated by timestamp hackery. Is that not accurate enough?
Well, when using CoreAudio this just gives me 2 * blockSize (I haven’t tested it on other interfaces yet). I suppose this could be right, but then I’d still need to know the elapsed time between the callback’s origin and my syncing function.
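One way to get that missing piece (just a sketch using JUCE’s Time class; the struct and names are mine): stamp the moment the audio callback fires with the high-resolution tick counter, then have the syncing code measure how long has passed since that stamp and add it to the device’s reported latency.

#include <atomic>
#include <juce_core/juce_core.h>   // or the single juce header, depending on the JUCE version

struct CallbackTimestamp
{
    std::atomic<juce::int64> ticksAtCallbackStart { 0 };

    // Call this first thing inside the audio callback.
    void stampFromAudioCallback()
    {
        ticksAtCallbackStart.store (juce::Time::getHighResolutionTicks());
    }

    // Call this from the syncing function to see how long ago the callback started.
    double secondsSinceCallbackStart() const
    {
        const auto elapsed = juce::Time::getHighResolutionTicks() - ticksAtCallbackStart.load();
        return juce::Time::highResolutionTicksToSeconds (elapsed);
    }
};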
When I compare my calculated offset to the one from dev->getOutputLatency(), the difference between the two seems to grow proportionally as buffer sizes increase…
Syncing with my hacked AudioDeviceManager and CoreAudio seems to be spot on for now, so maybe I’ll give ASIO a go too (I doubt getting reliable timing info from DirectSound will be easy…).
Have you played with the latency tester I just added to the JUCE demo? I’ve been tinkering with that lately, and it’s all a bit of a minefield, with different devices behaving completely differently. Any insights you have would be appreciated!
I don’t know from a development point of view, but I can tell you that in Ableton Live’s preferences there’s a “Driver error compensation” field in the latency section, and according to the help it’s there because “some audio interfaces report incorrect latencies”. Sounds to me like it’s the same problem you’re facing, isn’t it?
Sync is a complicated problem, and it seems to have gotten worse as the rest of these systems have gotten better. A lot of old gear (standalone drum machines, etc.) seems to work more reliably than much of today’s state-of-the-art software, from what I understand. That isn’t terribly surprising given the complexity of modern computer-based audio stacks, but given that drivers report dubious latencies and the hardware abstraction layers keep evolving, I’m starting to think the real solution might be giving the user dynamic, real-time control over the latency compensation, like a DJ. Hands-off would be ideal, but a manual trim may be the most robust approach.
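A minimal sketch of that idea (everything here is made up for illustration): keep a user-adjustable trim alongside whatever offset the driver or timestamps report, and let the UI nudge it while playback is running.

#include <atomic>

struct LatencyCompensation
{
    std::atomic<int> userTrimSamples { 0 };  // adjusted live from a UI control
    int reportedLatencySamples = 0;          // from the device / timestamps

    // The offset actually applied when lining audio up with the MIDI clock.
    int totalOffsetSamples() const
    {
        return reportedLatencySamples + userTrimSamples.load();
    }
};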
Maybe more helpfully, I found a paper from the PortAudio project somewhat illuminating: http://www.portaudio.com/docs/portaudio_sync_acmc2003.pdf It would be great if sync had more attention paid to it, as when you start to think about it, synchronization (generically) is the essence of social music. Think drum circle meditations. Maybe it’s time our computer programs started acting more socially as well.