Latency between adding and playing a midi note

On the message thread, I am trying to schedule a note on a MidiClip to play “now”, or as close to “now” as is possible. But I’m finding that if I schedule the note for anything less than about 350 milliseconds into the future, it gets skipped by the playhead.

To demonstrate, I created a small timer which adds notes x milliseconds into the future:

struct ClipTimer : public juce::Timer
{
    ClipTimer(te::MidiClip& c, te::Edit& e, te::TransportControl& t) : clip(c), edit(e), transport(t) {}

    void timerCallback() override
    {
        // Convert "delay seconds after the current transport position" into an
        // absolute beat position on the edit's timeline.
        const auto beat = edit.tempoSequence.timeToBeats(transport.getCurrentPosition() + delay);

        // Add MIDI note 55 (a G), half a beat long, at full velocity.
        clip.getSequence().addNote(55, beat, 0.5, 127, 0, nullptr);
    }

    te::MidiClip& clip;
    te::Edit& edit;
    te::TransportControl& transport;

    double delay = 0.3;
};
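
For completeness, this is roughly how the timer is driven (a minimal sketch; the 500 ms interval and the way the clip pointer is obtained are just what I happened to use):

// Sketch: fire the callback every 500 ms so that each tick adds a note
// 'delay' seconds ahead of the playhead.
ClipTimer clipTimer(*midiClip, edit, edit.getTransport()); // midiClip is a te::MidiClip*
clipTimer.startTimer(500);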

On my machine (in a debug build), when delay is anything less than 0.3 seconds, most of the notes go missing. Even at 0.3, some still do.

Can someone suggest the best way of approaching this problem? Is it possible to calculate the latency and compensate for it somehow? Or can I configure the plugin so that it doesn’t skip notes that are triggered at short latency?

What are you actually trying to do?

Trying to make changes to bits of the edit that are about to play is really abusing the way things are designed to work. If you need to hear real-time events, then you should just pump them directly into a midi input which gets routed through the edit however you want.

Maybe what you really want to do is to just put it into record mode and record your incoming midi to a clip while also previewing it live?

Tracktion seems to work in the paradigm of a moving playhead over a stationary sequence of events. What I am trying to do is work in a paradigm where the events are relative to a stationary playhead. In other words, instead of saying “play this note at t = 10 seconds”, I want to say, “play this note 3 seconds in the future”. This means that I need to convert relative time to absolute time, by doing something like edit.tempoSequence.timeToBeats(transport.getCurrentPosition() + delay);.

It works perfectly well, until the delays get small. Then I’m hitting the point where the MidiClip doesn’t see the note any more.

(Note that this all stems from a previous discussion, where you had suggested using Tracktion to handle the timing for my sequencer).

The “previewing live” option sounds interesting. Can you say a bit more about how this would work?

There isn’t really much sense in an engine that works in terms of “the future” because for that, you’d need to define “now”, which is harder than it sounds. If you’re just playing a single, isolated event “3 seconds in the future” then fine, you could build an engine that does that, within a margin of time error. But if you expect to have more than one thing playing, and to be able to define the time relationship between all those things, then you need a timeline. And if you have a timeline, it doesn’t make any difference whether you say the clips are stationary and the playhead is moving, or vice-versa, the end result is identical.

So I think maybe you need to think a bit harder about exactly what you’re trying to achieve.

All I mean is exactly what happens when you plug a midi keyboard in, point it at a track with e.g. a synth on it, and hit record. The events are played live with zero delay, but a recording is made so you can loop back over it and the notes are there.

But you’ve still been really vague about what you’re actually trying to build, so I can’t really say if this is what you need or not.

I will try to be more specific.

I am building a special type of sequencer–let’s call it a “non-linear sequencer”. Instead of each event happening one after another on a predefined timeline, events are determined by a simple domain specific language. So, imagine a line of cells on the screen, where each cell has a series of commands that the user has typed into it. The first cell has commands that say “wait 1 second, then play G#, then start the cell to the right”. The next cell has commands that say “wait 0.2 seconds, then play A, then start the cell to the left”.

So the sequencer is not built around a linear timeline per se–instead it interprets commands like “wait”, “play” and “start” (on the message thread). But of course, it still has the job of translating these events into linear time (on the audio thread). This can only happen one event at a time–when the first cell starts, it can schedule its events for 1 second in the future, but it can’t go any further, because it doesn’t know what the second cell will do.
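
For concreteness, the per-cell scheduling looks roughly like this (a simplified sketch–the Cell/Command types and helper names are illustrative, not my actual code):

// Illustrative sketch: a pre-parsed cell is a list of simple commands.
// Scheduling walks them, accumulating "wait" time and converting each "play"
// into an absolute beat position on the clip, as in the timer example above.
struct Command { enum Type { wait, play, start } type; double value = 0; };
struct Cell { std::vector<Command> commands; };

void scheduleCell(const Cell& cell, te::MidiClip& clip,
                  te::Edit& edit, te::TransportControl& transport)
{
    double offset = 0.0; // seconds ahead of the current transport position

    for (auto& cmd : cell.commands)
    {
        if (cmd.type == Command::wait)
        {
            offset += cmd.value;
        }
        else if (cmd.type == Command::play)
        {
            const auto beat = edit.tempoSequence.timeToBeats(transport.getCurrentPosition() + offset);
            clip.getSequence().addNote((int) cmd.value, beat, 0.5, 127, 0, nullptr);
        }
        // "start" hands control to the next cell; omitted here.
    }
}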

All of this is working reasonably well, but it breaks down when you try to schedule events with a small delay. So, for instance, if I try to get the second cell to wait for 0.2 seconds and then play the note A, the engine will typically lose the note. “wait 0” will be impossible here, but I was hoping that the latency would be a lot less than 300 ms.

Does this make sense?

But if you have a program that generates the midi sequence, and if changing the program means that the entire sequence needs to be rebuilt, why would there be any reason not to generate more than a second into the future? It sounds to me like that’d work fine. Kind of like in Waveform, you can play around with our pattern generator clips while the timeline’s running.

Of course, if you’re building some kind of weird non-deterministic system that just doesn’t fit into the model of a timeline-based playback system, then there’s probably no ready-made engine that would suit you; you might need to build your own (but that’d be a herculean effort!)

There will be non-deterministic elements in the sequencing… but I see your point. It sounds like it would probably be easier to calculate a few steps into the future than it would be to proceed with my original plan. I’ll put some thought into what this approach might look like. Thanks for taking the time to help me think through it.

The approach that Jules suggested has been working, and I have made a lot of progress over the past few months. My sequencer can now scan into the future and schedule events on the MidiClip. However, I am still left with a problem that relates to the original question. If a note is scheduled only very slightly into the future (say 0.2 seconds), then it likely won’t be played by the Edit. This can happen in several ways in my application, for instance, when the user first clicks on a cell to start the playback. It’s a bit problematic, and I would like to fix it. What I would like to do is catch the notes that the Edit has missed and then process them directly, a bit like this:

if (midiNoteIsInPast() && midiNoteDidntPlay())
    injectLiveMidiMessage();

Figuring out which notes are in the past is easy, but figuring out which notes didn’t play seems difficult. I could do it if there is a way of getting a callback every time a note does play–then I could work out which notes didn’t play by elimination. So my first question is: is there any way of getting a callback notification when the Edit plays a MidiNote?

If this approach doesn’t work, then another thing would be to identify the too-soon-to-be-noticed notes before they have been scheduled:

if (scheduledTime < thresholdTime)
    injectLiveMidiMessageAfterDelay(scheduledTime);
else
    scheduleEventOnMidiClip();

This might be easier, but I don’t know how to calculate the thresholdTime reliably. On my system, events scheduled 0.3 seconds into the future tend to get played, but not always, and I am guessing that it will vary from system to system.

To summarize, I am looking for a way to handle notes that are scheduled too soon for the edit to handle. Do either of the strategies I have suggested seem plausible? If not, is there another approach that I could take?

I’m still a little unclear about the use case here, so it’s hard to give advice. You usually have “live play” which you want to hear immediately and then “recorded play” which would be your scheduled events.
If your scheduled events are so close to your live events that this is unclear, I’m not sure what exact semantics would work for your use case.

However, if you follow this thread (Show Midi Data As It Is Recorded) you’ll see mention of AudioTrack::Listener and its recordedMidiMessageSentToPlugins callback. You might be able to use this to determine what actually has been sent to the plugin.
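
Something along these lines, purely as a sketch–the exact callback signature may differ between engine versions, so check the AudioTrack::Listener declaration in your copy of the engine:

// Sketch only: get notified as the track sends MIDI to its plugins, so you
// can tick off which scheduled notes were actually played.
struct PlayedNoteListener : public te::AudioTrack::Listener
{
    void recordedMidiMessageSentToPlugins(te::AudioTrack& track,
                                          const juce::MidiMessage& message) override
    {
        if (message.isNoteOn())
            DBG("Note sent to plugins: " << message.getNoteNumber());
    }
};

// Hypothetical usage: register it on the track you're scheduling on to.
// audioTrack->addListener(&playedNoteListener);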

You won’t be able to reliably determine a thresholdTime as it will depend heavily on the resources of the target machine and the complexity of the Edit being played back.

How are you working out when notes should be played back live and when they should be scheduled, and how far into the future? Do you want these scheduled events to always be played back, e.g. if the user starts playback from the beginning again?

It’s a little unusual, I know, and I’m probably not good at describing it. Basically, the scheduling is not laid out linearly as in a normal DAW, but is determined by a domain specific language. You enter text commands into a series of cells (wait, play, etc.), and then when you click on a cell, it interprets the commands and schedules the right notes. So it’s the “click” that is live, and the ensuing text commands that are predetermined, in a sense. I know that Tracktion isn’t designed for this kind of thing, but Jules and others on this forum have convinced me that porting it to Tracktion would still be easier than any other option. The prototype is working quite well–it’s just that initial “click” that often gets missed.

AudioTrack::Listener is exactly what I was looking for to test out my first idea–thanks for pointing it out to me. I see that it gives you a juce::MidiMessage–is there any way of comparing that to tracktion_engine::MidiNote? I.e. some equivalent to MidiMessage == MidiNote?

Yeah, I don’t think it would be any easier if you rolled your own, you’d have the same problems to deal with (and way more besides).

Can’t you just live play the “click” note and then play back the generated content?


There’s no way to directly compare a juce::MidiMessage and te::MidiNote as they’re intrinsically different things (Notes have start/length times for a start). It really depends on how complex your generated sequences are. You might get away with just comparing MIDI note number and channel number for note-on events?
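
Something like this, as a rough sketch (if te::MidiNote doesn’t expose a channel in your version, the channel normally comes from the clip, so you’d check that separately if you need it):

// Sketch of the "compare note numbers for note-on events" idea. Assumes the
// caller already knows which MidiNotes it recently scheduled.
static bool matchesNoteOn(const juce::MidiMessage& message, const te::MidiNote& note)
{
    return message.isNoteOn()
        && message.getNoteNumber() == note.getNoteNumber();
}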

I thought about playing the “click” note live, but I don’t think it would quite solve the problem, as there’s a chance that it will schedule another note at a very small interval, and we’d be back to defining the threshold again.

Using the AudioTrack::Listener seems promising–I’ll play around with the MidiMessage callback and see if I can make it work. Thanks again for your help.

Any attempt to schedule real-time events on the message thread is bound to fail. This is because there is a non-deterministic time delay when passing information to the processing thread. In plain English: it takes a small, random amount of time for the event to reach the processor. You are going to miss events, but unpredictably and intermittently. Very frustrating and hard to debug.

"The first cell has commands that say “wait 1 second, then play G#, then start the cell to the right”. The next cell has commands that say “wait 0.2 seconds, then play A, then start the cell to the left”.

The (IMHO) correct way to do this: Pass the commands to the processor thread, have the processor action the commands in the real-time thread. This will work deterministically right down to very fine precision. And everything will be solid and predictable.


This sounds promising! I’ve already got a thread dedicated to parsing the commands outside the message thread, so the setup is already there. How do I gain access to the processor thread?

Wait, do you mean doing it on the audio thread? If so, I don’t think that it would work, because the parsing is a pretty heavy operation (heavy enough for me to want to get it off the message thread). But if there is a third existing thread that deals with scheduling, it could work.

If the parsing is very heavy, it may pay to kind of “pre-process” the commands down to an easy-to-handle format before actioning them in the processor. For example, an enum (integer) to represent the type of command rather than a string. Hope that makes sense.
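
As a very rough sketch (the struct fields and queue size here are made up, not a recommendation): the parser thread reduces each command to a small POD value and pushes it into a lock-free FIFO, and the processor pops and actions them in its real-time callback.

// Sketch only: pre-processed commands as plain data, handed to the processor
// through a lock-free FIFO (juce::AbstractFifo) so no locking or allocation
// happens on the real-time side.
struct AudioCommand
{
    enum class Type { wait, play, start };
    Type type = Type::wait;
    int noteNumber = 0;
    double delaySeconds = 0.0;
};

struct CommandQueue
{
    juce::AbstractFifo fifo { 256 };
    std::array<AudioCommand, 256> buffer;

    void push(const AudioCommand& cmd)   // called from the parser thread
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite(1, start1, size1, start2, size2);

        if (size1 > 0)
            buffer[(size_t) start1] = cmd;

        fifo.finishedWrite(size1);
    }

    bool pop(AudioCommand& cmd)          // called from the processor side
    {
        int start1, size1, start2, size2;
        fifo.prepareToRead(1, start1, size1, start2, size2);

        if (size1 > 0)
            cmd = buffer[(size_t) start1];

        fifo.finishedRead(size1);
        return size1 > 0;
    }
};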

It does make sense, and it might well be possible. Just to be clear though–are you talking about using the audio thread to schedule the pre-processed commands? Or is it some other thread?

It would have to be the audio thread or you’ll end up with the same problem as before.

If you need access to the audio thread you should write a Plugin subclass then you can add that to a track.

Thanks for the clarification. You’ve warned me against doing this on the audio thread in the past, so I think I’ll stick to plan A for now. I feel like I’m getting close to a solution that will work tolerably well using the AudioTrack::Listener to catch notes that have been dropped. If that doesn’t work, I’ll think about the audio thread again.

Yeah, where you should do this really depends on some fine details: the scheduling times, where the data is coming from, whether it should be played back in the future, etc.

If you think about a MIDI arp plugin that schedules notes in the future, what you’re doing actually sounds quite similar to that. However, there also seem to be elements of simply creating “MIDI clip content” based on user interaction, which favours the clip approach.