Best way to find what's stalling MIDI/audio thread?

I have some audible jitter now and then in the MIDI output. I strongly suspect I have done something that stalls the MIDI output thread from launching at the right time. Is there any easy way to see which thread stole the proper launch time slot from the MIDI output thread?

I’m sending MIDI from getNextAudioBlock() using MidiOutput::sendBlockOfMessages().

Also, if I'm using a couple of timers for various things, including calculating some things for the GUI and then calling repaint() on certain Components, can that cause issues here?

So I guess the question boils down to: which threads can compete with the MIDI output thread? The issue is audible on Windows.

Tried using sendBlockOfMessagesNow() instead?

If you’re doing this from within getNextAudioBlock(), aren’t you imposing audio-buffer-based latency on the sending of MIDI data - i.e. shouldn’t MIDI and Audio be de-coupled in your design?

I wish to synchronize the audio and MIDI output calculations to each other and the easiest way to do it is to calculate them all in the getNextAudioBlock().

That being said, does the getNextAudioBlock always get called at exact time intervals? If it does, then I might just use that sendBlockOfMessagesNow() from the very first line of getNextAudioBlock and it should then work fine, I guess.

EDIT: Hmm, the documentation shows that sendBlockOfMessagesNow() doesn't take the samples-per-second parameter for the buffer, which suggests it might not send the MIDI data at the required times, but send everything as fast as possible. The sample positions for the output MIDI timings are important.

I also think sendNow does what it says on the tin, i.e. better to use the sendBlockOfMessages() you had initially.
I would expect getNextAudioBlock() to be called relatively steadily on a normal system, but there are no guarantees when the system comes under stress. The worst-case jitter is limited by the block size.
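To put a number on that bound, the block duration in milliseconds is just numSamples / sampleRate * 1000. A quick plain-C++ sketch (not JUCE code):

```cpp
#include <cassert>

// Duration of one audio block in milliseconds. This is also the worst-case
// scheduling jitter if the driver delivers two blocks back to back.
constexpr double blockDurationMs (int numSamples, double sampleRate)
{
    return 1000.0 * numSamples / sampleRate;
}

// A 256-sample block at 44.1 kHz lasts about 5.8 ms, so two back-to-back
// callbacks can easily land inside the same millisecond-counter tick.
```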

Some AudioIODevices support an AudioIODeviceCallbackContext, which carries a nanosecond host-time counter. But this is not available on all platforms.
It also requires writing your own bespoke AudioIODeviceCallback, which is not too hard though.

I tested whether there was jitter in when getNextAudioBlock() gets called. I added the following few lines of code:

uint32 old_time     = 0;   // Time::getMillisecondCounter() returns a uint32
int    zero_counter = 0;

void AudioMIDISystem::getNextAudioBlock(const juce::AudioSourceChannelInfo& r_output_buffer)
{
    const uint32 current_time = Time::getMillisecondCounter();
    const uint32 delta_time   = current_time - old_time;

    old_time = current_time;

    if (delta_time == 0)
        zero_counter++;
    ....

What I get is fairly regularly 0 ms between some getNextAudioBlock() calls and 10-11 ms between others. All of them seem to be 256 samples long, as they should be. But since I call getMillisecondCounter() for all of them, I obviously see jitter if I assume each processed block gets called at a fixed time interval. I think this is likely my issue, or at least part of it.
I've no idea how I get those 0 ms blocks in between. I'll check whether it has anything to do with me having changed the output buffer size at some point.

EDIT:
Indeed it does. When I disable my code that changes the sample buffer size, I don't get those 0 ms blocks in between anymore… I'll check whether the jitter went away.

Here’s copy/paste from the audio initialization:

// Some platforms require permissions to open input channels so request that here
if (juce::RuntimePermissions::isRequired(juce::RuntimePermissions::recordAudio)
    && !juce::RuntimePermissions::isGranted(juce::RuntimePermissions::recordAudio))
{
    RuntimePermissions::request(RuntimePermissions::recordAudio,
        [&](bool granted) { setAudioChannels(granted ? 2 : 0, 2); });
}
else
{
    // Specify the number of input and output channels that we want to open
    setAudioChannels(2, 2);
}
 /*
 // If 256 sample buffer is available, reconfigure the audio buffer size to use that instead
 constexpr int PREFERRED_DEFAULT_BUFFER_SIZE = 256;
 auto          buffer_size_list = deviceManager.getCurrentAudioDevice()->getAvailableBufferSizes();

 if (buffer_size_list.contains(PREFERRED_DEFAULT_BUFFER_SIZE))
 {
     AudioDeviceManager::AudioDeviceSetup current_audio_setup;

     deviceManager.getAudioDeviceSetup(current_audio_setup);
     current_audio_setup.bufferSize = PREFERRED_DEFAULT_BUFFER_SIZE;

     deviceManager.setAudioDeviceSetup(current_audio_setup, true);
 }
 */

If I uncomment the lower part of the code, the sample buffer size goes to 256, but I get those 0 ms anomalies between a lot of the getNextAudioBlock() calls. No idea why.

And quick test sounds like the jitter might be gone if I don’t change the buffer size.

I’ll try to rephrase my problem, so it’ll be easier to solve:

In a constructor of a class inherited from juce::AudioAppComponent I do the following (copy paste from tutorials)

    // Some platforms require permissions to open input channels so request that here
    if (juce::RuntimePermissions::isRequired (juce::RuntimePermissions::recordAudio)
        && ! juce::RuntimePermissions::isGranted (juce::RuntimePermissions::recordAudio))
    {
        RuntimePermissions::request(RuntimePermissions::recordAudio,
                                    [&] (bool granted) { setAudioChannels (granted ? 2 : 0, 2); });
    }
    else
    {
        // Specify the number of input and output channels that we want to open
        setAudioChannels(2, 2);
    }

Everything works correctly.
But if I add right after that piece of code the following lines, things break down:

        AudioDeviceManager::AudioDeviceSetup current_audio_setup;
        deviceManager.getAudioDeviceSetup(current_audio_setup);

        current_audio_setup.bufferSize = 256;

        deviceManager.setAudioDeviceSetup(current_audio_setup, true);

What starts happening is that getNextAudioBlock() gets called at irregular intervals.
As if two different threads were calling it without knowing of each other.
Almost always, every other call to getNextAudioBlock() comes 0 ms after the previous one.
Each call has the 256-sample buffer length I set up above.

Does the above method of changing the buffer size leave an old thread running and simply add a new one, instead of adjusting the old thread's behaviour properly? Or is something else at play here?

As far as I am able to tell, this happens on Windows and not on Mac.

Attached to this comment is the full test application JUCE project in a .ZIP file which demonstrates this issue:

  • Change audio buffer size.
  • getNextAudioBlock() starts getting called with 0ms intervals quite often.
  • This happens on Windows, not on Mac.

The application is less than 100 lines of code. You can see it also in the code block below.

The application itself does nothing more than initialise the audio as implemented by the template project + tutorials, and then change the default audio buffer size. It then prints the total number of 0 millisecond intervals at which getNextAudioBlock() gets called.

If I change the audio buffer size to anything smaller than the system’s initial default value, I immediately start getting those 0ms blocks.

How can this be fixed? Please advise.

#include "MainComponent.h"

MainComponent::MainComponent()
{
    setSize (800, 600);

    initializeAudio();
}

MainComponent::~MainComponent()
{
    shutdownAudio();
}

void MainComponent::initializeAudio()
{
    // Some platforms require permissions to open input channels so request that here
    if (juce::RuntimePermissions::isRequired (juce::RuntimePermissions::recordAudio)
        && ! juce::RuntimePermissions::isGranted (juce::RuntimePermissions::recordAudio))
    {
        juce::RuntimePermissions::request(juce::RuntimePermissions::recordAudio,
                                          [&] (bool granted) { setAudioChannels (granted ? 2 : 0, 2); });
    }
    else
    {
        // Specify the number of input and output channels that we want to open
        setAudioChannels(2, 2);
    }
    
    // If requested sample buffer is available, reconfigure the audio buffer size to use that instead
    constexpr int PREFERRED_DEFAULT_BUFFER_SIZE = 256;
    auto          buffer_size_list = deviceManager.getCurrentAudioDevice()->getAvailableBufferSizes();

    for (auto x : buffer_size_list)
    {
        juce::String text("Supported buffer size: ");
        text << x;
        DBG(text);
    }

    if (buffer_size_list.contains(PREFERRED_DEFAULT_BUFFER_SIZE))
    {
        juce::String text("SWITCHING TO BUFFER SIZE: ");
        text << PREFERRED_DEFAULT_BUFFER_SIZE;
        DBG(text);

        juce::AudioDeviceManager::AudioDeviceSetup current_audio_setup;

        deviceManager.getAudioDeviceSetup(current_audio_setup);
        current_audio_setup.bufferSize = PREFERRED_DEFAULT_BUFFER_SIZE;
        //current_audio_setup.sampleRate = 44100;

        deviceManager.setAudioDeviceSetup(current_audio_setup, true);
    }
}

void MainComponent::prepareToPlay (int samplesPerBlockExpected, double sampleRate)
{
}

static juce::uint32 s_previous_time = 0;
static int          s_counter       = 0;

void MainComponent::getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill)
{
    const juce::uint32 current_time_block_start = juce::Time::getMillisecondCounter();
    const juce::uint32 delta_time               = current_time_block_start - s_previous_time;
    
    if (delta_time == 0)
    {
        juce::String text("0ms block count: ");
        text << s_counter;
        DBG(text);

        s_counter++;
    }
    
    s_previous_time = current_time_block_start;


    bufferToFill.clearActiveBufferRegion();
}

void MainComponent::releaseResources()
{
}

void MainComponent::paint (juce::Graphics& g)
{
    g.fillAll (getLookAndFeel().findColour (juce::ResizableWindow::backgroundColourId));
}

void MainComponent::resized()
{
}

Audio_Config_Test.zip (16.5 KB)

Sounds like it might be an audio driver issue. Which are you using?

I’m using the default ones that come with Windows. The audio interface is the standard audio outputs on my computer’s motherboard. I haven’t installed anything special there.

Other users have the same issue too, at least judging from the reports I've received.

Do you get the same issue with the test application I posted above?

I ran your code, put together a quick jitter test while generating a sine wave, and had no audio jitter. The average time between frames was bang on the expected value.

You should use juce::Time::getMillisecondCounterHiRes() as it gives you much finer resolution.

The times between getNextAudioBlock are entirely up to the driver.

If you want to use buffer sizes of 512 samples and below, I recommend selecting an ASIO driver on Windows for the best results. ASIO4ALL is a commonly used driver that works with most audio devices.
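If you want to quantify the jitter rather than just count 0 ms deltas, a small running-stats helper like this works (a plain C++ sketch of my own, not a JUCE class; in a real app you'd feed it juce::Time::getMillisecondCounterHiRes() values taken at the top of each callback):

```cpp
#include <algorithm>
#include <cassert>

// Minimal running min/max/mean of the intervals between callback timestamps.
struct JitterStats
{
    double lastTime = -1.0, minDelta = 1e9, maxDelta = 0.0, sumDelta = 0.0;
    int count = 0;

    void addTimestamp (double nowMs)
    {
        if (lastTime >= 0.0)   // skip the very first call: no interval yet
        {
            const double delta = nowMs - lastTime;
            minDelta = std::min (minDelta, delta);
            maxDelta = std::max (maxDelta, delta);
            sumDelta += delta;
            ++count;
        }
        lastTime = nowMs;
    }

    double meanDelta() const { return count > 0 ? sumDelta / count : 0.0; }
};
```

A near-zero minDelta with a sane meanDelta is exactly the "blocks delivered back to back" pattern described above.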

I have to use the lower-resolution millisecond timer, since I process/generate MIDI data which must be time-stamped with that lower-resolution timer's output. That's what I was told on these forums, since that's what the MIDI output thread uses internally. When I previously used the high-resolution timer, the MIDI output drifted several seconds in the worst case. It all fixed itself when I used the lower-resolution timer instead.

I need the MIDI and audio data to be synced together, but that weird 0 ms delay between some of the blocks breaks the whole system.

So the jitter is not audible within the audio blocks, only in when getNextAudioBlock() gets called. If it's not called at constant intervals, it'll be impossible to generate timestamps for the outgoing MIDI data, at least by the means the JUCE documentation tells us to use.

So if it's entirely up to the OS's driver when that method is called, what would be a proper way of calculating the MIDI timestamps (sample index locations) when generating MIDI data? The exact problem is that audio is always "synced" to the samples being played back: you always know "I've played X samples, and in the next frame I'll do the same". But MIDI output requires timestamps / sample locations, there's both incoming and outgoing MIDI data, and each output call requires that millisecond value from Time::getMillisecondCounter()…

Your timing should be based on the current sample position, not system time.

std::atomic<int64_t> sampleTime { 0 };

void MainComponent::getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill)
{
    // sampleRate here is the rate passed to prepareToPlay()
    auto songPositionInMilliseconds = (sampleTime / sampleRate) * 1000.0;

    sampleTime += bufferToFill.numSamples;

    bufferToFill.clearActiveBufferRegion();
}

To explain this the other way, the problematic parameter is millisecondCounterToStartAt in the signature below:

void MidiOutput::sendBlockOfMessages (const MidiBuffer& buffer, double millisecondCounterToStartAt, double samplesPerSecondForBuffer)

MidiOutput::sendBlockOfMessages() needs to know the time (in milliseconds) when the MIDI data should be sent out. JUCE internally uses Time::getMillisecondCounter() to compare if that time has been reached and the data should be sent out to external hardware.

The high-resolution version of the millisecond counter can drift several seconds apart from the low-resolution version, and the drift gets worse the longer the computer has been on. This means the high-res version cannot be used for this purpose.

Naturally, the MIDI output send time should be calculated from the exact time getNextAudioBlock() gets called, if it were called at regular intervals. That would make it easy to sync the outgoing and incoming MIDI events with the played audio samples.

If getNextAudioBlock() isn't called at regular intervals, none of those timer-based calculations work anymore. If one 256-sample block gets processed immediately after the previous one, that usually gives 0 ms between those two audio blocks, which in turn breaks the MIDI timing. The audio timing will be OK, as it isn't based on any timer output.
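For what it's worth, one possible workaround is to stop reading the clock in every callback and instead derive each block's start time from the running sample count. This is just a sketch of the idea, with a hypothetical SampleClock helper; anchorMs would be read from Time::getMillisecondCounter() once, at stream start:

```cpp
#include <cassert>
#include <cstdint>

// Sketch: take one wall-clock anchor when playback starts, then derive every
// block's MIDI start time from the running sample count, so 0 ms callback
// gaps no longer matter. SampleClock is a hypothetical helper, not JUCE API.
struct SampleClock
{
    double anchorMs   = 0.0;      // wall-clock milliseconds at sample 0
    double sampleRate = 44100.0;
    std::int64_t samplesPlayed = 0;

    // Millisecond value one could pass as millisecondCounterToStartAt for
    // the block that begins at the current sample position.
    double blockStartMs() const
    {
        return anchorMs + 1000.0 * (double) samplesPlayed / sampleRate;
    }

    void advance (int numSamples) { samplesPlayed += numSamples; }
};
```

The assumption here is that the audio device's sample clock doesn't drift too far from the millisecond counter over the session, which is exactly the caveat the following replies discuss.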

Low-level audio code like this isn’t supposed to adhere to real-time limits other than completing (hopefully) in a specific time frame. So the drift you’re seeing is the driver scheduling calls that are out of sync with a ‘real-time’ clock, not the other way around.

Time::getMillisecondCounter in MidiOutput is just used to determine if the timeout period has elapsed; other than that, it doesn’t do any real-time synchronisation. So you can’t even guarantee that the MidiOutput thread is called in any reasonable time frame.

You need to implement some form of clock synchronisation to do this properly. How you implement it is situation dependent, but you cannot rely on things like the driver's callback frequency; it won't work.
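As a minimal illustration of what such clock synchronisation could look like (an assumed exponential-smoothing scheme of my own, not anything JUCE provides; the 0.01 smoothing factor is an arbitrary illustrative choice):

```cpp
#include <cassert>

// Sketch of the simplest possible clock sync: keep a smoothed offset between
// a sample-derived clock and the wall clock, and nudge timestamps by it.
struct DriftFilter
{
    double offsetMs = 0.0;
    bool primed = false;

    // sampleTimeMs: time derived from the running sample counter
    // wallTimeMs:   the millisecond counter measured at the same instant
    void update (double sampleTimeMs, double wallTimeMs)
    {
        const double rawOffset = wallTimeMs - sampleTimeMs;

        if (! primed) { offsetMs = rawOffset; primed = true; }  // first sample: adopt as-is
        else          offsetMs += 0.01 * (rawOffset - offsetMs); // then smooth slowly
    }

    double corrected (double sampleTimeMs) const { return sampleTimeMs + offsetMs; }
};
```

Real implementations are more careful (outlier rejection, rate estimation rather than a plain offset), but the structure is the same: measure both clocks at one instant, smooth the disagreement, apply the correction.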

Hmm, it would be nice if JUCE had some guaranteed-to-work timer/method that could be used for these kinds of purposes: keeping incoming and generated MIDI in sync with audio.

The best option would be to just output the MIDI events with the sample index in them, as they already carry that info, and have JUCE simply try to output them as close to that time as possible. That would be the intuitive and natural way for JUCE's end users.

I wonder if these kinds of issues are the exact reason why so many audio software companies highly recommend that end users use ASIO drivers instead of the regular Windows drivers? So it's not just the lower latency, but possibly also these kinds of weird irregularities in the incoming audio-buffer fill calls.

I have to test if installing ASIO gets rid of the issue completely. If it does, I’ll also recommend using ASIO in all cases.

On another thread I asked about MidiOutput::sendBlockOfMessages and the value to use for millisecondCounterToStartAt (Question: MidiOutput::sendBlockOfMessages and time value). But this thread has confused me even more.

But it does affect the timing of the outgoing MIDI messages. In that sense it's relevant to the question of whether one should set millisecondCounterToStartAt using Time::getMillisecondCounter() or Time::getMillisecondCounterHiRes().

It seems that on Windows things are quite complicated, but on both Android and Mac Time::getMillisecondCounter() and Time::getMillisecondCounterHiRes() are obtained internally in similar ways (by calls to mach_absolute_time() on Mac, and clock_gettime (CLOCK_MONOTONIC, &t) on Android). So, in terms of precision, it seems to me that it shouldn’t really make a difference which is used, at least on those platforms. Am I right?

The problem I see, though, is that Time::getMillisecondCounter() returns a 32-bit value and will wrap back to 0 after 2^32 milliseconds of uptime, which is about 49.7 days. Since MidiOutput internally uses this low-resolution timer, shouldn't we set millisecondCounterToStartAt using the same timer, so as not to run into a possible discrepancy? 49 days of uptime doesn't seem very improbable!

The documentation of MidiOutput::sendBlockOfMessages seems to suggest using the low-res timer. But then again, there is an example within the JUCE code itself where the hi-res timer is used, in juce_AudioProcessorPlayer, line 332:

midiOutput->sendBlockOfMessages (incomingMidi, Time::getMillisecondCounterHiRes(), sampleRate);

It would be great if someone could clear up these questions!


The timing method is only used to calculate relative periods; it doesn't matter if the clock rolls over, unless you're unlucky enough to queue a message before the rollover occurs and the dispatch happens afterwards. It should probably be updated to use the higher-resolution timer.
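For the relative-period case, unsigned 32-bit arithmetic is what makes the rollover harmless (a plain C++ sketch):

```cpp
#include <cassert>
#include <cstdint>

// With unsigned 32-bit arithmetic, (now - then) still yields the correct
// elapsed time across a rollover, as long as the real interval is < 2^32 ms.
// This is why relative-period code survives the ~49.7-day wrap.
std::uint32_t elapsedMs (std::uint32_t thenMs, std::uint32_t nowMs)
{
    return nowMs - thenMs;  // modular arithmetic handles the wrap
}
```

Note that a direct comparison like now > deadline does break at the wrap; the wrap-safe form is to check whether the difference, cast to a signed 32-bit value, is non-negative.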

I would (and do) use the high-resolution timers everywhere I need to measure time, but that still doesn't guarantee your MIDI output will be 100% accurate, because of drift.

DAWs get around this by supplying a master clock; everyone bases their timing off said clock, and everyone (plugins) stay in sync.

If you're trying to sync with external sources, you will need a more robust method of detecting drift, and/or of syncing to or providing your own clock source. Some DAWs have the ability to do this.

TL;DR: Computers aren’t very good at keeping time, even quick ones.

To answer your question directly: you should use the lower-resolution timer to stay compatible with that method, as it shares the same time base.

Perfect, that clears all my doubts. Thanks for the quick answer!