Multitrack Audio Playback with MIDI Timecode

Hello,

I am relatively new to JUCE and have been spending time with the API, writing sample code and experimenting, which has been going well. One thing I’d like to do as a personal project to explore the API further is to write a standalone backing track player (not a plug-in). Essentially this would be a multitrack audio player that could play up to 32 tracks of audio on individual channels. Additionally, I’d like to send MIDI Time Code (not MIDI clock) so that the audio can be synced to external sources such as lighting desks.

All is fine so far; I have even written some code and gotten audio out. However, you almost immediately hit a few snags with this type of thing, and I have some questions related to sync.

As near as I can tell so far, it is relatively easy to wire up some AudioSources and blast out the audio. However, these sources are out of sync, as they know nothing about each other or their positions in time relative to each other. Does the API support syncing multiple audio tracks together, and if so, what should I be looking at? I have not been able to find anything that appears to do this.

The next question is around MIDI Time Code. I have read the MTC spec, and it’s fairly straightforward with regard to the messages that need to be sent, but once again we are back to timing.

The audio thread is locked to the sample rate of the audio card (e.g., 44100 samples per second). If I’ve done my math right, that works out to 44100 / 30 fps / 4 MIDI Quarter Frames = 367.5 samples, which means every 367.5 samples (how do you deal with the half sample? that’s a separate question) I need to send a MIDI Quarter Frame message for sync.

OK, fine, but what if the user is using a large buffer size such as 8192 samples (which is very likely with an app like this, for audio stability)? Every getNextAudioBlock call requests 8192 samples, which means I can’t get 367.5-sample granularity. In other words, by the time I get the next request from the audio card, 22.29 MIDI quarter frames (8192 / 367.5) have passed! How can I reconcile these two clock rates to get a steady stream of MIDI messages?
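For illustration, here is a rough sketch of how I’ve been thinking the fractional period could be handled, by tracking an absolute sample count rather than counting down 367.5 samples at a time (the names and numbers here are just my own placeholders, nothing from JUCE):

```cpp
// Exploratory sketch: locating MTC quarter-frame boundaries by absolute sample
// position instead of counting down a fractional 367.5-sample period.
// Assumes 44100 Hz and 30 fps (non-drop) for the numbers in the comments.
#include <cmath>
#include <cstdint>

struct QuarterFrameLocator
{
    double  sampleRate          = 44100.0;
    double  quarterFramesPerSec = 30.0 * 4.0;   // 120 quarter frames per second at 30 fps
    int64_t samplesPlayed       = 0;            // absolute position since "play"
    int64_t quarterFramesSent   = 0;

    // Returns how many quarter-frame messages fall inside the next block.
    // Because everything is computed relative to sample 0, the half-sample
    // period never has to be rounded and no error accumulates over time.
    int quarterFramesDueInBlock (int numSamples)
    {
        samplesPlayed += numSamples;

        auto totalDue = (int64_t) std::floor ((double) samplesPlayed
                                              * quarterFramesPerSec / sampleRate);
        auto due = (int) (totalDue - quarterFramesSent);
        quarterFramesSent = totalDue;
        return due;   // an 8192-sample block yields 22 or 23, never 22.29
    }
};
```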

Sorry for the long post, but I have been trying to think this through, and either I don’t understand a key principle or I am missing something. Looking for some feedback.

Thank you

Jason

When getNextAudioBlock requests 8192 samples, that’s around 186 ms at 44100 Hz, so the audio you render then will not be heard for at least another 186 ms, plus a few ms more for getting it to and through the DAC (see getOutputLatencyInSamples, though that may not include everything). So there is no tight relationship between when you render audio and when you should send the MTC MIDI events.

If you start the audio and then start sending MTC after however many milliseconds getOutputLatencyInSamples works out to (converted to a time in milliseconds, since that’s what the MIDI output works with), it will initially be roughly in sync but will drift. So you have to keep track of how much audio you have played compared to how much MTC time has elapsed, and adjust the rate at which you send MTC.
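As a very rough sketch of that idea (the class and member names are placeholders; it assumes 30 fps MTC, that the MIDI output has already been opened elsewhere, and that the audio callback reports how many samples it has rendered):

```cpp
// Sketch only: a timer-driven MTC sender that paces itself off the number of
// samples the audio callback has actually rendered, minus the output latency.
#include <juce_audio_devices/juce_audio_devices.h>
#include <atomic>

class MtcSender : private juce::HighResolutionTimer
{
public:
    MtcSender (juce::MidiOutput& out, double rate) : midiOut (out), sampleRate (rate) {}
    ~MtcSender() override { stopTimer(); }

    // Call from getNextAudioBlock with the number of samples just rendered.
    void audioBlockRendered (int numSamples)      { samplesPlayed += numSamples; }

    void start (int outputLatencySamples)
    {
        latencySamples = outputLatencySamples;
        startTimer (2);   // poll faster than the ~8.3 ms quarter-frame period
    }

private:
    void hiResTimerCallback() override
    {
        // Estimate how much audio has actually reached the DAC so far.
        auto audible = (double) (samplesPlayed.load() - latencySamples);
        if (audible <= 0.0)
            return;

        // Quarter frames that should have gone out by now (30 fps * 4 = 120/s).
        auto due = (juce::int64) (audible * 120.0 / sampleRate);

        while (quarterFramesSent < due)
        {
            auto piece = (int) (quarterFramesSent % 8);   // which of the 8 QF pieces
            midiOut.sendMessageNow (juce::MidiMessage::quarterFrame (piece,
                                        valueForPiece (piece)));
            ++quarterFramesSent;
        }
    }

    int valueForPiece (int piece) const
    {
        // Placeholder: derive the hours/minutes/seconds/frames nibble for this
        // piece from quarterFramesSent, per the MTC spec. Omitted for brevity.
        juce::ignoreUnused (piece);
        return 0;
    }

    juce::MidiOutput& midiOut;
    double sampleRate;
    std::atomic<juce::int64> samplesPlayed { 0 };
    juce::int64 latencySamples = 0, quarterFramesSent = 0;
};
```

Because the timer catches up to "quarter frames due" on every tick, it naturally slows down or speeds up with the audio clock instead of free-running on the system clock.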

I assume you are mixing your AudioSources with a MixerAudioSource? To sync the audio, simply add all the audio sources before calling prepareToPlay on the mixer. After that you can be sure the number of processed callbacks will always be the same on all sources.
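A minimal sketch of that order of operations (names here are placeholders):

```cpp
// Sketch: add every source to the mixer before preparing it, so that all
// sources receive identical prepareToPlay/getNextAudioBlock calls afterwards.
#include <juce_audio_basics/juce_audio_basics.h>

void wireUpMixer (juce::MixerAudioSource& mixer,
                  juce::AudioSource& track1,
                  juce::AudioSource& track2,
                  double sampleRate, int blockSize)
{
    // 'false' means the mixer does not take ownership of the sources.
    mixer.addInputSource (&track1, false);
    mixer.addInputSource (&track2, false);

    // Only then prepare the mixer. It forwards prepareToPlay to each input,
    // and from here on every getNextAudioBlock call pulls the same number of
    // samples from all of them, keeping the tracks in lock-step.
    mixer.prepareToPlay (blockSize, sampleRate);
}
```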

Well that is simple now isn’t it? :slight_smile: Thank you both for your input, I’ll give it a shot. However…

How do the sources initially sync? There is no “reference sample” data (for lack of a better term) being sent in the AudioSourceChannelInfo buffer, so where is “sample 0” amongst all the sources? How does the MixerAudioSource know which sample to initially sync to across all sources?

At some point all the audio sources have to agree on a common reference point, and some class has to track their progress against that reference to remain in sync. In a DAW, for example, it’s when I hit play: that is sample 0, and all tracks start playing against that reference point in time. But in JUCE, as soon as I wire up an AudioSource, samples are being passed. They may be blank, but samples are moving (at least that is what I think I am seeing). There would have to be some type of flag that says, “Hey, I am actually playing audio now; mark this point in time and everyone sync to it.” Right?

Am I overthinking this? Missing something?

The MixerAudioSource will not compensate for delays in the individual AudioSources. This means that if one callback is delayed, all the other AudioSources will also be delayed by the same amount; in fact, you will hear a glitch in the audio.

Great, thank you. Time for more experiments… More to come! :slight_smile:

Hi,
I have three audio sources connected to a MixerAudioSource, and I would like to start the audio from these sources at different moments (like items on separate tracks in a DAW).
Is there any way to insert an offset before an AudioSource? For example, I would like to start AudioSource1 and AudioSource2 at time 0 s, but start AudioSource3 at time 3 s.
What would be the best way to implement this with JUCE?

I solved this in a previous engine by inheriting from the class you want to time-shift (make sure it is derived from PositionableAudioSource), keeping a private offset member, and overriding getNextReadPosition(), setNextReadPosition() and getNextAudioBlock() to reflect the offset. Make sure to be consistent about what your offset means (does positive mean into the track, or before the track?).
You might need to adapt getTotalLength() as well, to reflect that the source is longer by the offset.
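A rough sketch of that first version, assuming the convention that a positive offset means silence inserted before the wrapped source starts (the class name and details are placeholders, not an existing JUCE class):

```cpp
// Sketch: wraps a PositionableAudioSource and shifts it later in time by
// emitting silence for the first 'offset' samples.
#include <juce_audio_basics/juce_audio_basics.h>

class OffsetAudioSource : public juce::PositionableAudioSource
{
public:
    OffsetAudioSource (juce::PositionableAudioSource& sourceToWrap, juce::int64 offsetInSamples)
        : inner (sourceToWrap), offset (offsetInSamples) {}

    void prepareToPlay (int blockSize, double sampleRate) override { inner.prepareToPlay (blockSize, sampleRate); }
    void releaseResources() override                               { inner.releaseResources(); }

    void setNextReadPosition (juce::int64 newPosition) override
    {
        position = newPosition;
        inner.setNextReadPosition (juce::jmax ((juce::int64) 0, newPosition - offset));
    }

    juce::int64 getNextReadPosition() const override { return position; }
    juce::int64 getTotalLength() const override      { return inner.getTotalLength() + offset; }
    bool isLooping() const override                  { return inner.isLooping(); }

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& info) override
    {
        // Emit silence until the offset has elapsed, then hand the remainder
        // of the buffer to the wrapped source.
        auto silent = (int) juce::jlimit ((juce::int64) 0,
                                          (juce::int64) info.numSamples,
                                          offset - position);
        if (silent > 0)
            info.buffer->clear (info.startSample, silent);

        if (silent < info.numSamples)
        {
            juce::AudioSourceChannelInfo remainder (info.buffer,
                                                    info.startSample + silent,
                                                    info.numSamples - silent);
            inner.getNextAudioBlock (remainder);
        }

        position += info.numSamples;
    }

private:
    juce::PositionableAudioSource& inner;
    juce::int64 offset = 0, position = 0;
};
```

To start AudioSource3 three seconds late, you would wrap it with an offset of 3 * sampleRate samples and add the wrapper to the mixer instead of the source itself.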

Version two is to catch all the places where setNextReadPosition is called. Apart from that call, AudioSources just deliver the next block regardless of position.

Version one means more classes to write; version two is harder to maintain, since the logic is spread across the project.

Hope that helps.

Thanks. I will try to implement it this way.