Live SMPTE sending

Hi,

I'm trying to build an application that receives network packets with clock information from the device that's playing (I receive the BPM, the current beat, and the position in the song). I wrote a listener for that.

I already have a MIDI clock sender (using a HighResolutionTimer) that sends 24 messages per beat (the input is only 4 messages per beat).
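Roughly what that sender looks like (a simplified sketch, assuming JUCE's HighResolutionTimer and an already-opened MidiOutput; the class and names here are mine):

```cpp
// Minimal 24-PPQN MIDI clock sender sketch. The BPM is fed in from the
// network listener; this just ticks at bpm * 24 / 60 times per second.
#include <JuceHeader.h>

class ClockSender : public juce::HighResolutionTimer
{
public:
    explicit ClockSender (juce::MidiOutput& out) : midiOut (out) {}

    void setBpm (double bpm)
    {
        // 24 clock messages per beat -> interval between ticks in ms.
        // Note: startTimer takes whole milliseconds, so this drifts at
        // tempos whose interval isn't an integer; a real version would
        // compensate for the rounding.
        const double intervalMs = 60000.0 / (bpm * 24.0);
        startTimer ((int) juce::jmax (1.0, intervalMs));
    }

    void hiResTimerCallback() override
    {
        midiOut.sendMessageNow (juce::MidiMessage::midiClock()); // 0xF8
    }

private:
    juce::MidiOutput& midiOut;
};
```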

Now I want to generate an SMPTE audio signal from this information with as little delay as possible, but I'm not sure what the best way to do this is (I don't have much experience with audio programming).

What would be the best way to do the SMPTE sender? 

- using pre-recorded SMPTE and setting its position with the playhead? (doesn't look like an "elegant" solution)

- writing to the AudioSampleBuffer in processBlock, with variables tracking the position of the input signal

- using processBlock (AudioSampleBuffer, MidiBuffer), where I first write things to the MIDI buffer and then convert them to audio signals in processBlock?

It has to be as close to real time as possible, so I guess I should set the sample rate as high as possible and the buffer as small as possible (without things becoming unstable).


Thanks! Hope you guys understand my explanation :)

Having generated and read LTC timecode a lot over the years, I think your general scheme might be problematic.

LTC is a running 'clock' at 24, 25, 29.97 (drop-frame), or 30 frames per second, counting frames, seconds, minutes, and hours. Each frame is 80 'bits', with the bits sent using a biphase-mark (FM) encoding scheme that produces tones between approximately 1200 Hz and 2400 Hz (at 30 fps).
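To make the frame structure concrete, here's roughly how one of those 80-bit frames gets packed (a plain C++ sketch from the SMPTE 12M layout as I remember it: BCD time fields plus the fixed sync word, with user bits and the parity/flag bits left at zero):

```cpp
// Sketch: pack hh:mm:ss:ff into an 80-bit LTC frame.
#include <array>
#include <cstdint>

std::array<uint8_t, 80> packLtcFrame (int h, int m, int s, int f, bool dropFrame)
{
    std::array<uint8_t, 80> bits {};

    auto putBcd = [&bits] (int value, int pos, int numBits)
    {
        for (int i = 0; i < numBits; ++i)
            bits[(size_t) (pos + i)] = (uint8_t) ((value >> i) & 1); // LSB first
    };

    putBcd (f % 10, 0, 4);   putBcd (f / 10, 8, 2);   // frames
    putBcd (s % 10, 16, 4);  putBcd (s / 10, 24, 3);  // seconds
    putBcd (m % 10, 32, 4);  putBcd (m / 10, 40, 3);  // minutes
    putBcd (h % 10, 48, 4);  putBcd (h / 10, 56, 2);  // hours
    bits[10] = dropFrame ? 1 : 0;                     // drop-frame flag

    // Fixed sync word in bits 64..79: 0011 1111 1111 1101
    const uint8_t sync[16] = { 0,0,1,1, 1,1,1,1, 1,1,1,1, 1,1,0,1 };
    for (int i = 0; i < 16; ++i)
        bits[(size_t) (64 + i)] = sync[i];

    return bits;
}
```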

So the first potential problem is what you are converting from and to. There is a standard for sending SMPTE over MIDI, called MTC. You periodically get a complete time message, with short quarter-frame messages in between. In that case you have the time you are receiving, plus MIDI latency and jitter, and the time you are sending. You can't instantly convert received MTC messages to a new timecode message, because each frame must be sent in its entirety or the receiver will think it lost sync. Non-incremental time messages also send most equipment into resync mode.
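Those quarter-frame messages are simple to assemble (a sketch, not production code; eight 0xF1 messages each carry one nibble of the time, so a complete time takes two frames to arrive):

```cpp
// Sketch: assemble MTC quarter-frame messages (status byte 0xF1) into a
// full hh:mm:ss:ff time. Because the 8 pieces span two frames, the
// assembled time is about two frames behind "now".
#include <cstdint>

struct MtcDecoder
{
    int frames = 0, seconds = 0, minutes = 0, hours = 0, rateIndex = 0;

    void handleQuarterFrame (uint8_t data) // the data byte following 0xF1
    {
        const int piece = (data >> 4) & 0x07;
        const int nib   = data & 0x0F;

        switch (piece)
        {
            case 0: frames  = (frames  & 0x10) | nib;                 break;
            case 1: frames  = (frames  & 0x0F) | ((nib & 0x01) << 4); break;
            case 2: seconds = (seconds & 0x30) | nib;                 break;
            case 3: seconds = (seconds & 0x0F) | ((nib & 0x03) << 4); break;
            case 4: minutes = (minutes & 0x30) | nib;                 break;
            case 5: minutes = (minutes & 0x0F) | ((nib & 0x03) << 4); break;
            case 6: hours   = (hours   & 0x10) | nib;                 break;
            case 7: hours   = (hours   & 0x0F) | ((nib & 0x01) << 4);
                    rateIndex = (nib >> 1) & 0x03; // 0=24, 1=25, 2=29.97df, 3=30 fps
                    break;
        }
    }
};
```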

The normal way to deal with this in broadcast equipment is to do a software phase-locked loop of sorts. You basically have the time you are sending and the time you are receiving; if they drift apart, you gradually speed up or slow down the code you are sending, up to about +/-2% or so.
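The core of that loop can be as small as this (sketch; the gain is a placeholder you'd tune):

```cpp
// Soft-PLL sketch: compare the timecode you're emitting with the timecode
// you're receiving and trim the output rate by at most +/-2%.
double adjustedRate (double nominalFps, double sentSeconds, double receivedSeconds)
{
    const double error = receivedSeconds - sentSeconds; // positive: we're behind
    const double gain  = 0.1;                           // loop gain (tune to taste)

    double ratio = 1.0 + gain * error;
    if (ratio > 1.02) ratio = 1.02;  // clamp so downstream readers keep lock
    if (ratio < 0.98) ratio = 0.98;

    return nominalFps * ratio;
}
```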

Converting from a MIDI clock or beat message would be an extra step, but an LTC receiver typically cannot handle radical changes in clock speed or instantly deal with large jumps in the clock.

The other big problem is that, while LTC SMPTE can be wired like an audio signal, it's atypical. If you were to hook a SMPTE generator to a typical PC audio input, record it, and play it back, it would sound like the usual painful warble to your ears, but most LTC readers would not lock to it. Between the inherent aliasing of sampling and all the filtering on a typical digital audio output, it is hard to get the required levels and wave shapes. This is why LTC is normally generated and read by a tiny MCU in hardware connected to the PC. In the cases where you need LTC, usually everyone just spends the $200 on a fully compliant signal instead of futzing and futzing to get gear to talk.

But good luck!


Hi,

Thanks for the answer! I know it's going to be very hard to get a reliable "clock", but the problem is that the device only sends network packets, from which I can get the tempo/position. I'm going to send it to Resolume (a video program), possibly via internal audio routing. I think the program handles frame jumping pretty well (I'll do some tests with it now).

I know I'll have to work with some sort of "prediction" of the input packets to get continuous SMPTE, but that won't be that hard a problem. Since you think the MTC method might not be good, I might try the "normal" method of generating SMPTE with the AudioSampleBuffer.


EDIT: Resolume handles frame skipping, reversed playback, and random playback very well! So I don't think this will be a problem if I can generate a good SMPTE signal.

So I'll approach it like this:

- In my getNextAudioBlock I'll look at the playhead position and the tempo of the last received packet, and generate the correct SMPTE for that block, assuming the tempo stays the same within the block. If I take 48 kHz audio with a buffer of 480 samples, this will be called 100x per second. SMPTE at 30 frames/s is 2400 bits/s of data (30 x 80 bits), and it's self-clocking, so no separate sync signal is needed. That works out to 20 samples per bit, so each 480-sample audio block covers 24 bits, about a third of an 80-bit SMPTE frame. With this I'll have a 10 ms delay (no problem), maybe more with threading and locking. A rough sketch of what I mean is below.
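Something like this is what I have in mind (rough sketch with hypothetical names; packLtcFrame() is the 80-bit frame packing sketched earlier in the thread, and the resync to the network-derived position is left out):

```cpp
// Sketch: render LTC with biphase-mark coding in the audio callback.
#include <JuceHeader.h>
#include <array>
#include <cstdint>

class LtcGenerator
{
public:
    void prepare (double sampleRate, double fps)
    {
        // 80 bits per frame, 2 half-bits per bit: 10 samples at 48 kHz / 30 fps.
        samplesPerHalfBit = sampleRate / (fps * 80.0 * 2.0);
        currentBits = packLtcFrame (h, m, s, f, false);
    }

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& info)
    {
        auto* out = info.buffer->getWritePointer (0, info.startSample);

        for (int i = 0; i < info.numSamples; ++i)
        {
            if (phase >= samplesPerHalfBit)        // next half-bit boundary
            {
                phase -= samplesPerHalfBit;

                // Biphase mark: always toggle at a bit boundary,
                // and toggle again mid-bit when the bit is a '1'.
                if (halfIndex == 0 || currentBits[(size_t) bitIndex] == 1)
                    level = -level;

                if (++halfIndex == 2)              // finished this bit
                {
                    halfIndex = 0;
                    if (++bitIndex == 80)          // finished this frame
                    {
                        bitIndex = 0;
                        advanceTimecode();
                        currentBits = packLtcFrame (h, m, s, f, false);
                    }
                }
            }

            out[i] = level * 0.8f;                 // leave some headroom
            phase += 1.0;
        }
    }

private:
    void advanceTimecode()                         // free-running 30 fps count
    {
        if (++f == 30) { f = 0; if (++s == 60) { s = 0; if (++m == 60) { m = 0; ++h; } } }
    }

    std::array<uint8_t, 80> currentBits {};
    double samplesPerHalfBit = 10.0, phase = 0.0;
    int bitIndex = 0, halfIndex = 0, h = 0, m = 0, s = 0, f = 0;
    float level = 1.0f;
};
```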

EDIT2: OK, now I see that we can't choose the sample rate and buffer size, so some extra calculation is necessary.

alfaleader, I just sent you a PM.
We at Resolume are interested in what you are trying to accomplish ;-)