Convert juce::Time values to EditPlaybackContext "streamTime"

I would like to receive messages on the network thread, and know (as precisely as possible) when they arrived, relative to a playing te::Edit. My plan is as follows:

  1. When a message is received on the network thread, timestamp it with Time::getHighResolutionTicks or Time::getMillisecondCounterHiRes.
  2. Use an AbstractFifo implementation to funnel the messages to the Message Thread, where they can be used to annotate the Edit (see the sketch after this list).
  3. Calculate the EditPlaybackContext streamTime from the high-resolution ticks, and then convert that to “Edit Time” (assuming the Edit is playing).
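
Here’s a minimal sketch of steps 1 and 2, assuming a single-producer/single-consumer queue built on juce::AbstractFifo (TimedOscValue and OscMessageQueue are my own illustrative names, not engine types):

#include <array>
#include <juce_core/juce_core.h>

struct TimedOscValue
{
    double systemTimeSecs = 0.0; // Time::getMillisecondCounterHiRes() * 0.001 at arrival
    float value = 0.0f;
};

class OscMessageQueue
{
public:
    // Called on the network thread (OSCReceiver RealtimeCallback)
    void push (float value)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (1, start1, size1, start2, size2);

        if (size1 > 0)
            buffer[(size_t) start1] = { juce::Time::getMillisecondCounterHiRes() * 0.001, value };

        fifo.finishedWrite (size1 + size2);
    }

    // Called periodically on the Message Thread to drain the queue
    template <typename Fn>
    void popAll (Fn&& handle)
    {
        int start1, size1, start2, size2;
        fifo.prepareToRead (fifo.getNumReady(), start1, size1, start2, size2);

        for (int i = 0; i < size1; ++i)
            handle (buffer[(size_t) (start1 + i)]);

        for (int i = 0; i < size2; ++i)
            handle (buffer[(size_t) (start2 + i)]);

        fifo.finishedRead (size1 + size2);
    }

private:
    juce::AbstractFifo fifo { 512 };
    std::array<TimedOscValue, 512> buffer;
};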

Does this seem like a sensible approach?

Is there a way to convert juce HighResolutionTicks or the value returned by getMillisecondCounterHiRes to stream time? I have not been able to find the precise time at which the Edit’s streamTime begins increasing monotonically.

Thanks for any help!

Are you simply recording the notes into a MIDI clip?
If so, probably the simplest thing to do is create a VirtualMidiInputDevice (DeviceManager::createVirtualMidiDevice) and call its handleIncomingMidiMessage (const MidiMessage& m) method.

If you leave the timestamp as 0, it will get stamped with the system time, which I think means it will line up with the Edit time when recorded to a clip.
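
Something like this, perhaps (a rough sketch only – the exact signature of createVirtualMidiDevice and the device lookup may differ between engine versions):

namespace te = tracktion_engine;

// 'engine' is assumed to be your te::Engine instance
auto& dm = engine.getDeviceManager();
dm.createVirtualMidiDevice ("OSC Bridge");

// Look up the device that was just created and feed it messages. A timestamp
// of 0 means the engine stamps the message with the system time on arrival.
for (int i = 0; i < dm.getNumMidiInDevices(); ++i)
    if (auto* vmd = dynamic_cast<te::VirtualMidiInputDevice*> (dm.getMidiInDevice (i)))
        if (vmd->getName() == "OSC Bridge")
            vmd->handleIncomingMidiMessage (juce::MidiMessage::noteOn (1, 60, 0.8f));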

Thanks @dave96.
I’m receiving OSC, not MIDI messages, because I need floats for my use case, so I don’t think that VirtualMidiInputDevice will work.

If the AudioRenderContext's “streamTime” corresponds to system time as reported by the juce::Time methods, then I can use that. I’ll look into that now.

Okay, it looks like the streamTime solution won’t work. The audioRenderContext.streamTime start/end values start at zero when the render context is created and count upwards in seconds from there.

I don’t know when (relative to Time::getMillisecondCounterHiRes()) the streamTime starts counting.

Is it not possible to convert a time value retrieved with Time::getMillisecondCounterHiRes() to a time relative to the AudioRenderContext's streamTime?

I’m a little bit fuzzy about exactly what you’re trying to do. Are you recording this live? Are you creating MidiMessages to pass on to the Edit or just want a series of Edit time stamps for an external record?

Is it not possible to just get the current playback time from the PlayHead contained in the current EditPlaybackContext?

You could manually sync up the system time with the DeviceManager’s stream time (which gets passed on to the audio node render callbacks via the AudioRenderContext), but bear in mind that this can reset and change when the AudioDeviceManager changes, so it might not be the best way.
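
Roughly, that manual sync might look like this (just a sketch, with the caveats above):

#include <atomic>

namespace te = tracktion_engine;

std::atomic<double> streamTimeOffset { 0.0 };

// Call this from an audio render callback (audio thread) to record the
// offset between stream time and the system clock
void updateOffset (const te::AudioRenderContext& rc)
{
    streamTimeOffset.store (rc.streamTime.start - juce::Time::getMillisecondCounterHiRes() * 0.001,
                            std::memory_order_relaxed);
}

// On the network thread: convert a system timestamp (seconds) to stream time
double toStreamTime (double systemTimeSecs)
{
    return systemTimeSecs + streamTimeOffset.load (std::memory_order_relaxed);
}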

I think I’d need to know a little more about the specifics to say how I’d probably approach this.

I’m not using MIDI at all – just OSC. I am playing an Edit, and I want to record incoming OSC messages live and in time with the edit.

The OSC messages arrive and are handled in a callback on the Network thread (via OSCReceiver::addListener ( Listener< RealtimeCallback >* listener )), so I cannot access the edit’s members such as the Transport, Playhead, EditPlaybackContext, etc. This is why I’m currently using the juce::Time methods to timestamp incoming messages.

It would be possible to handle the incoming OSC messages on the Message Thread, but I need to know when each OSC message arrived, which may differ from when the handler callback eventually runs on the Message Thread.

Does that make sense? I think I need either an atomic edit playback time that can be safely read from the Network thread, OR the exact juce::Time at which Edit playback began – which would let me calculate the edit-relative time of each incoming OSC message from its timestamp.

Please correct me if I’m wrong, but my current understanding is that any other method would limit the timing resolution to the size of the audio block (e.g. roughly 11.6 ms for a 512-sample buffer at 44.1 kHz).

Thank you for looking at this with me!

Ok, this is a little convoluted, as it’s kind of outside the Engine playback graph, but I think you can achieve what you need by having a dummy MIDI device and syncing to that.


For MIDI, messages get synced in the following way:

We have this, which syncs the system time with DeviceManager::streamTime:

void MidiInputDevice::masterTimeUpdate (double time)
{
    // 'time' is the DeviceManager's stream time, so adjustSecs becomes the
    // offset that maps the system clock (in seconds) onto stream time
    adjustSecs = time - Time::getMillisecondCounterHiRes() * 0.001;

    const ScopedLock sl (instanceLock);

    for (auto instance : instances)
        instance->masterTimeUpdate (time);
}

then this happens when you receive a note:

// if the driver didn't stamp the message, stamp it with the system time now
if (m.getTimeStamp() == 0 || (! engine.getEngineBehaviour().isMidiDriverUsedForIncommingMessageTiming()))
    message.setTimeStamp (Time::getMillisecondCounterHiRes() * 0.001);

// then shift the timestamp from system time to stream time
message.addToTimeStamp (adjustSecs);

so ultimately you can do this:

    // convert the stream-time timestamp to (unlooped) Edit time and record it
    recorded.addEvent (MidiMessage (message, context.playhead.streamTimeToSourceTimeUnlooped (message.getTimeStamp())));

So if you have a MIDI input (even a virtual one – I don’t think it has to be assigned to a track), you can simply call MidiInput::getAdjustSecs(), add that to the current system timestamp of your messages, and then sync that time with the Edit time using playhead.streamTimeToSourceTimeUnlooped (timeStamp).
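
Put together, it might look roughly like this (a sketch against the current API; annotateEditAtOscTime and how you obtain the MidiInputDevice are up to you):

namespace te = tracktion_engine;

// 'systemTimeSecs' is the Time::getMillisecondCounterHiRes() * 0.001 value
// captured when the OSC message arrived on the network thread
void annotateEditAtOscTime (te::Edit& edit, te::MidiInputDevice& midiIn, double systemTimeSecs)
{
    // Shift the system timestamp into stream time using the device's offset...
    const double streamTime = systemTimeSecs + midiIn.getAdjustSecs();

    // ...then convert stream time to Edit time via the playhead
    if (auto* context = edit.getCurrentPlaybackContext())
    {
        const double editTime = context->playhead.streamTimeToSourceTimeUnlooped (streamTime);
        juce::ignoreUnused (editTime); // annotate the Edit at editTime here
    }
}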

We’re thinking about moving this timestamping and syncing functionality out of the MIDI devices, but I’m not sure when we’ll get around to that, so the above is probably your best bet for accuracy at the moment.

I hope that helps!

Thank you! This makes sense, and is what I needed to know.

It is somewhat difficult to access adjustSecs. Here’s why:

  • MidiInputDevice::adjustSecs is a non-atomic double. It can be accessed with the MidiInputDevice::getAdjustSecs() method, but only safely from the “Built-in Output” thread, where it is updated.
  • DeviceManager::adjustSecs is an atomic double, but it is a private member, and there is no getter method.

It would be nice if either

  • there was a getter method on the Device Manager OR
  • MidiInputDevice::adjustSecs was atomic.

Would you consider either of those? My current workaround involves subclassing VirtualMidiInputDevice, overriding the masterTimeUpdate method, and exposing an atomic double that way. However, it is not the most robust thing because of how I/O devices are automatically saved and loaded in the properties file – the engine is not really made to handle input device types that are not in the InputDevice::DeviceType enum.
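
For reference, the workaround looks roughly like this (a sketch only – check the VirtualMidiInputDevice constructor signature and that masterTimeUpdate is virtual in your engine version):

#include <atomic>

namespace te = tracktion_engine;

// Illustrative subclass that mirrors adjustSecs into an atomic on every update
class TimedVirtualMidiDevice  : public te::VirtualMidiInputDevice
{
public:
    using te::VirtualMidiInputDevice::VirtualMidiInputDevice;

    void masterTimeUpdate (double streamTime) override
    {
        te::VirtualMidiInputDevice::masterTimeUpdate (streamTime);

        // same calculation the base class performs for adjustSecs
        atomicAdjustSecs.store (streamTime - juce::Time::getMillisecondCounterHiRes() * 0.001,
                                std::memory_order_relaxed);
    }

    std::atomic<double> atomicAdjustSecs { 0.0 };
};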

Ok, I can make MidiInputDevice::adjustSecs atomic, but can I double-check that this actually works for your purposes first?

Although technically UB, on x86 this will effectively be atomic; the instructions generated are basically the same as for an atomic with a std::memory_order_relaxed argument.
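
For comparison, the atomic version would just be (plain std::atomic usage, nothing engine-specific):

#include <atomic>

std::atomic<double> adjustSecs { 0.0 };

// On x86 both of these compile to plain moves, much as they would for a raw double
void setAdjustSecs (double newValue)   { adjustSecs.store (newValue, std::memory_order_relaxed); }
double getAdjustSecs()                 { return adjustSecs.load (std::memory_order_relaxed); }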

Yes, that should work for me. Access to adjustSecs gets me the information I need to sync incoming messages with a playing edit.

One thing that would be nice to know: Will te::Engine::getDeviceManager() always return the same pointer? Or are there circumstances in which an Engine's device manager may be replaced/reinitialized at runtime after the first initialization?

It looks like the Input Devices stored in the DeviceManager will only be added/removed from the Message Thread – so I think that even as an atomic value, adjustSecs will only really be safe to access on the Message Thread (we don’t want the underlying device to be deleted before a call to its getAdjustSecs() method returns).

For my purposes that should work fine, though. Thanks for looking into this!

Yes, at least at the moment this is valid for the lifetime of the Engine object.

Yes, that’s true…
I think the “correct” way would be for us to do as I mentioned earlier and encapsulate the stream-time syncing in a subscriber-type interface.


Yes, I agree it would be prettier to consume this via a subscriber interface.

This could help with precision in some cases too, for example, when the adjustSecs value is slightly out of date.

I’m aiming for precision within 1ms, and my testing so far shows that access to an atomic adjustSecs in the Device manager works fine for that.