Transport control with Link

I’m trying out Link integration in a tracktion_engine based app. I wrote a quick macOS app that plays a four-note MIDI sequence in a loop on launch. I’m testing against Ableton Live running locally with Link enabled, just by listening to the result.

Although I’m able to compile, run, and join a Link session, the resulting audio sometimes sounds like it’s lagging behind the metronome in Live. Sometimes the two are in sync, but there’s often a noticeable lag.

In my test app, I’ve simply implemented the listener’s methods this way:

void linkRequestedTempoChange (double newBpm) override
{
    edit.tempoSequence.getTempos()[0]->setBpm (newBpm);
}

void linkRequestedPositionChange (double adjustmentInBeats) override
{
    auto lastTimelinePosition = edit.getCurrentPlaybackContext()->playhead.getPosition();
    edit.getCurrentPlaybackContext()->playhead.setPosition (lastTimelinePosition + adjustmentInBeats);
}

Does that look like a reasonable implementation to you?


The argument to linkRequestedPositionChange is adjustmentInBeats. Your lastTimelinePosition is a time in seconds, so maybe you need to convert that to beats using the Edit’s TempoSequence first, then add the adjustment, and then convert that back to seconds before setting the playhead position?
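
Something along these lines, perhaps (an untested sketch, using the TempoSequence conversion methods from your snippet):

void linkRequestedPositionChange (double adjustmentInBeats) override
{
    auto& tempoSequence = edit.tempoSequence;
    auto& playhead = edit.getCurrentPlaybackContext()->playhead;

    // seconds -> beats, apply the adjustment, then beats -> seconds
    auto currentBeats = tempoSequence.timeToBeats (playhead.getPosition());
    playhead.setPosition (tempoSequence.beatsToTime (currentBeats + adjustmentInBeats));
}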


Raaaa, stupid me… of course. However, still no luck after changing to:

double getCurrentPositionSeconds() const
{
    if (auto* playhead = transport.getCurrentPlayhead())
        return playhead->getPosition();

    return transport.getCurrentPosition();
}

void linkRequestedTempoChange (double newBpm) override
{
    edit.tempoSequence.getTempos()[0]->setBpm (newBpm);
}

void linkRequestedPositionChange (double adjustmentInBeats) override
{
    double currentPosBeats = transport.edit.tempoSequence.timeToBeats (getCurrentPositionSeconds());
    currentPosBeats += adjustmentInBeats;
    edit.getCurrentPlaybackContext()->playhead.setPosition (transport.edit.tempoSequence.beatsToTime (currentPosBeats));
}

Anyone had some luck with this?

How far out of sync does it go? And does it drift or just start out of sync?

I’d maybe look into AbletonLink::ImplBase and see if there’s anything that jumps out there?

It just starts out of sync (or goes out of sync after a tempo change) but doesn’t drift. It’s off by a fraction of a beat; I haven’t made measurements, but I’d say it’s never more than about half a beat.

Is Link supported in Waveform?

Unfortunately not, hence I don’t really have a place to test this.
This was added by an Engine user a while ago and both Link and the engine may have changed in that time.

I’m also a bit too snowed under to be able to spend much time looking into this, but if you want to do some digging I’ll offer pointers where I can. It’s also doubly difficult with two playback engines at the moment, as setting the playhead works slightly differently in each one. I’d prefer to stick to only the tracktion_graph implementation for now, as I’ll be removing the old one over the next couple of months.

At the moment I’m unsure whether it’s Link that’s providing incorrect jump values or the engine that isn’t jumping to the correct place.


I see. The first thing that looks fishy to me in AbletonLink::ImplBase is that it’s making calls to Link’s audio thread methods from a timerCallback. For instance:

double getBarPhase (double quantum) override
{
    return link.captureAudioSessionState().phaseAtTime (clock.micros(), quantum);
}

although, from Link.hpp:

  /*! @brief Capture the current Link Session State from the audio thread.
   *  Thread-safe: no
   *  Realtime-safe: yes
   *
   *  @discussion This method should ONLY be called in the audio thread
   *  and must not be accessed from any other threads. The returned
   *  object stores a snapshot of the current Link Session State, so it
   *  should be captured and used in a local scope. Storing the
   *  Session State for later use in a different context is not advised
   *  because it will provide an outdated view.
   */
  SessionState captureAudioSessionState() const;

Since the whole class relies on this approach, it feels quite pointless to me to try and fix it (unless someone else can tell me they had good results with this code).
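
For reference, the usage pattern Link’s own examples follow is to capture the state inside the audio callback and keep it local to that scope, roughly:

// Simplified sketch, assuming 'link' is the ableton::Link instance and the
// host time for this block has already been computed
void processAudioBlock (std::chrono::microseconds hostTimeAtOutput, double quantum)
{
    // Audio thread only: capture a snapshot and use it within this scope
    const auto sessionState = link.captureAudioSessionState();

    const auto beat  = sessionState.beatAtTime  (hostTimeAtOutput, quantum);
    const auto phase = sessionState.phaseAtTime (hostTimeAtOutput, quantum);

    // ... drive playback from beat and phase for this block only ...
}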

I have tested Ian Caburian’s (@SQUARESEQUENCE) JUCE implementation and found it to work very well in a pure JUCE app. It works by polling the Link API from the audio thread.

In the context of TE, I’m not sure how to do this. My current idea is to create a Tracktion plugin, poll the Link session from its audio callback, and adjust the transport from there if needed. Does that sound reasonable to you, or would you recommend another audio-thread entry point in the engine that would be better suited?

For what it’s worth, I had a similar experience to yours when I was playing around with the Tracktion implementation. Unless I’m also wrong, I came to the same conclusion: the way Tracktion calls audio methods from the message thread is the cause of the incorrect sync.

A quick fix would be for Tracktion to substitute their calls to captureAudioSessionState() with captureAppSessionState(). However, I’ve found captureAppSessionState() to be jittery for syncing, since Link uses a ringbuffer to pass beat info between threads; captureAppSessionState() should really only be used for UI purposes. This may be unnoticeable though, depending on the target platform.
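
i.e. the one-line substitution in getBarPhase() would be something like:

// Thread-safe variant, legal to call from the message-thread timerCallback,
// but jittery: it's intended for UI display rather than tight sync
return link.captureAppSessionState().phaseAtTime (clock.micros(), quantum);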

A better but more intrusive fix would be a dedicated listener on the audio thread that tracks Link bpm/beat info, leaving the user free to choose how to get that info off the audio thread (e.g. message-thread polling, an async call, or a ringbuffer), depending on how precise vs. performant they need their sync to be.
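
A rough sketch of that idea, using atomics as the hand-off (hypothetical class, not from the Tracktion codebase):

#include <atomic>
#include <ableton/Link.hpp>

// Track Link state on the audio thread, publish the values through atomics
// so other threads can poll them without locking
struct LinkAudioThreadTracker
{
    // Audio thread only: call once per audio callback
    void update (ableton::Link& link, std::chrono::microseconds hostTime, double quantum)
    {
        const auto state = link.captureAudioSessionState();
        bpm.store   (state.tempo(),                         std::memory_order_release);
        phase.store (state.phaseAtTime (hostTime, quantum), std::memory_order_release);
    }

    // Any thread: lock-free reads
    std::atomic<double> bpm   { 120.0 };
    std::atomic<double> phase { 0.0 };
};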


Just having a quick look at this. It seems to me that the content of the timerCallback should really be a public method of the AbletonLink class, and that this should then be called during the EditPlaybackContext::fillNextAudioBlock/fillNextNodeBlock method.

This would also mean any position changes will get dispatched during the current audio callback so should be accurate.

Then we’d probably need to check that the rest of the code in the timerCallback is actually thread-safe, and that any callbacks it makes are either safe to call on the audio thread or are dispatched safely to the message thread. I can see a few things that aren’t thread-safe in there at the moment, like the access into the TempoSequence, but there are thread-safe alternatives for this (TempoSequencePosition).

It needs a bit of refactoring but I think that’s probably how it should be done.
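
Something of this shape, roughly (the method name is just illustrative):

class AbletonLink
{
public:
    // The former timerCallback() body would move here, called from
    // EditPlaybackContext::fillNextAudioBlock/fillNextNodeBlock so any
    // position changes get dispatched within the current audio callback
    void audioThreadUpdate()
    {
        // ... only touch audio-thread-safe state here, e.g. use
        // TempoSequencePosition rather than TempoSequence directly, and
        // dispatch anything else safely to the message thread ...
    }
};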

I have made a bit of progress with this issue thanks to your pointers. I avoided the refactoring route for now, as I prefer to first make sure I’m able to actually do the syncing, relying on Ian’s working code. Here is the approach I implemented:

  • I added an AbletonLinkTransport class which essentially keeps Link’s hostTimeFilter updated and queries the quantum phase with the correct audio-thread methods. This part of the code is based on @SQUARESEQUENCE’s code, which I’ve tested in a pure JUCE app and slightly adapted so that it uses the DeviceManager’s latency-reporting methods. I’m pretty confident this part works fine.
  • I’m calling AbletonLinkTransport from EditPlaybackContext::fillNextAudioBlock to query the quantum phase. Here, I’m reusing a bit of the logic from tracktion_AbletonLink to compute the offset between the local beat and the beat from Link.
  • I’m then using this offset to adjust the playhead’s position if the offset is above a given threshold. From what I can hear, the resulting timing sounds good, but the position adjustment keeps happening (creating audio artifacts that sound like some sort of noteOn retriggers), so I suspect there’s some kind of inaccuracy in the way I reposition the playhead, making it continuously go out of phase with Link.

Below is what the modified EditPlaybackContext::fillNextAudioBlock looks like. When updating the playhead and the current bpm, I should probably be using TempoSequencePosition instead of tempoSequence, as you mentioned in your previous answer, but I can’t figure out how. Would you have any pointers for improving that part of the code?

void EditPlaybackContext::fillNextAudioBlock (EditTimeRange streamTime, float** allChannels, int numSamples)
{
    CRASH_TRACER

    if (edit.isRendering())
        return;

    SCOPED_REALTIME_CHECK

    // update stream time for track inputs
    for (auto in : midiInputs)
        if (in->owner.getDeviceType() == InputDevice::trackMidiDevice)
            in->owner.masterTimeUpdate (streamTime.getStart());

    midiDispatcher.masterTimeUpdate (playhead, streamTime.getStart());

    playhead.deviceManagerPositionUpdate (streamTime.getStart(), streamTime.getEnd());

    // sync this playback context with a master context
    if (contextToSyncTo != nullptr && playhead.isPlaying())
    {
       ...
    }

    // adjust playhead to link
    const double linkBeatPhase = abletonLinkTransport.update();
    const double currentPosBeats = transport.edit.tempoSequence.timeToBeats (playhead.getPosition());
    const double localBeatPhase = negativeAwareFmod (currentPosBeats, 1.0);

    double offset = linkBeatPhase - localBeatPhase;

    if (std::abs (offset) > 0.5)
        offset = offset > 0 ? offset - 1.0 : 1.0 + offset;

    linkTimeSinceLastPlayheadUpdate += streamTime.getLength();

    if (std::abs (offset) > 0.01 && linkPlayheadUpdateInterval < linkTimeSinceLastPlayheadUpdate)
    {
        playhead.setPosition (transport.edit.tempoSequence.beatsToTime (currentPosBeats + offset));
        linkTimeSinceLastPlayheadUpdate = 0;
    }

    // update local bpm
    auto localBpm = transport.edit.tempoSequence.getTempos()[0]->getBpm();
    auto linkBpm = abletonLinkTransport.getBpm();
    if (localBpm != linkBpm)
    {
        juce::MessageManager::getInstance()->callAsync ([this, linkBpm] () {
            transport.edit.tempoSequence.getTempos()[0]->setBpm(linkBpm);
        });
    }

    edit.updateModifierTimers (playhead, streamTime, numSamples);
    midiDispatcher.nextBlockStarted (playhead, streamTime, numSamples);

    for (auto r : edit.getRackList().getTypes())
        r->newBlockStarted();

    for (auto wo : waveOutputs)
        wo->fillNextAudioBlock (playhead, streamTime, allChannels, numSamples);
}

These noteOn retriggers are to be expected on the first few calls of the hostTimeFilter.

If you peek into the internals, the hostTimeFilter is a basic linear regression that attempts to align the sample position of the client’s playhead with the global host time maintained by all Link peers. Being a linear regression, it needs a few hundred sample points before it begins to output precise values.

This is why the Link documentation suggests that you begin running the host time filter as soon as you start your app, which typically means the audio callback has been running for a few seconds before the user hits the play button. Similarly, to maintain smooth syncing, you should run the host time filter continuously on each and every audio callback, regardless of whether Link is enabled or whether the internal playhead is playing. By “running” the host time filter, I mean calling its sampleTimeToHostTime() method.
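
In code, that amounts to something like this (simplified from the pattern in Link’s audio examples; member names are illustrative):

#include <ableton/Link.hpp>
#include <ableton/link/HostTimeFilter.hpp>

// Keep the filter running on every callback, even when the transport is
// stopped or Link is disabled, so the regression has converged by the time
// the sync actually matters
ableton::link::HostTimeFilter<ableton::link::platform::Clock> hostTimeFilter;
double sampleTime = 0;

void audioCallback (int numSamples)
{
    const auto hostTime = hostTimeFilter.sampleTimeToHostTime (sampleTime);
    sampleTime += numSamples;

    // ... add the device's output latency to hostTime, then use it to
    // query the Link session state for this block ...
}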

Further, if you inspect the beat values you receive from Link and compare them with raw beat conversions of your internal sample position, you will find that occasionally (and noticeably) Link WILL feed you buffer overlaps or buffer gaps that result in these so-called noteOn retriggers, or even skip some expected notes altogether. You will have to account for this as a simple reality of the protocol.

Personally, the way I do it is to keep track of two beat ranges: the current and the previous. The current beat range is what I pass down the audio chain to my sequencer/playhead/audio transport. The previous range is simply what the current range was in the previous audio callback.
To calculate the current range, I assign the previous range’s end value to the current range’s start; this forces the beat transition between audio callbacks to be precisely continuous, i.e. no gaps or overlaps. The current range’s end is then calculated as the Link session’s beat (via session.beatAtTime()) plus the size of the current audio callback buffer (in beats).

The result is that whenever Link gives you these buffer overlaps or gaps, the beat range you feed to the rest of your chain shrinks or expands relative to the actual range Link fed you, so you may not strictly speaking “start” in sync, but you will always manage to “end” in sync, and in a smooth manner. In practice, these variable ranges can’t be perceived, and the sync actually “sounds” even tighter when you make extreme or crazy-fast bpm adjustments from any connected peer.
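
Here’s a rough sketch of that bookkeeping (hypothetical types; it assumes you’ve already converted the buffer length to beats at the current tempo):

#include <chrono>
#include <ableton/Link.hpp>

// Two beat ranges: the one fed downstream last callback, and this one.
// Forcing current.start = previous.end keeps the beat stream gapless even
// when Link's reported beats overlap or jump between callbacks.
struct BeatRange { double start = 0.0, end = 0.0; };

BeatRange previousRange;

BeatRange getNextBeatRange (const ableton::Link::SessionState& session,
                            std::chrono::microseconds hostTime,
                            double quantum, double bufferSizeInBeats)
{
    BeatRange current;
    current.start = previousRange.end;  // continuous with the last block
    current.end   = session.beatAtTime (hostTime, quantum) + bufferSizeInBeats;

    previousRange = current;
    return current;
}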

@SQUARESEQUENCE thank you for your detailed feedback.

you should run the host time filter continuously on each and every audio callback, regardless of whether Link is enabled or whether the internal playhead is playing. By “running” the host time filter, I mean calling its sampleTimeToHostTime() method.

Yes, this is what I’m doing under the hood when calling abletonLinkTransport.update() in the audio callback. I will take a look at the beat ranges I get from Link and see if I can use your method.

Assuming I get the correct offset to adjust the playhead, @dave96 does the above code to set the playhead position look right to you?

OK, I think I’ve just made progress on this issue, for anyone interested. In the above code, I replaced:

playhead.setPosition(transport.edit.tempoSequence.beatsToTime(currentPosBeats + offset));

with:

auto newPosition = transport.edit.tempoSequence.beatsToTime(currentPosBeats + offset);
playhead.setRollInToLoop (newPosition);

Sorry everyone, I’ve been snowed under the last couple of weeks.

The only thing setRollInToLoop does is enable you to set a position before the loop start time when loop mode is enabled. Is the problem you’re seeing that the position you’re trying to set is before the loop start?

If I were you, I’d probably encapsulate the Link logic into a function or a class so it’s not all in the update code (I’ve done this for the context syncing in the new engine version of these methods, although maybe only on a branch). It’s a bit easier to see what’s going on then.


I know you’re just trying to get this working, but you also can’t capture a raw this in callAsync, as the object may be deleted by the time the callback happens. You’ll have to capture an Edit::WeakRef instead.


No worries about the delay. My test app seems to work like a charm now. For sure there’s some refactoring to be done now that it’s working. Let me know if you want to check out my Tracktion fork in case you’re interested in refactoring the AbletonLink class.


I will do at some point. Thanks for your work!
