Creating a sample-accurate metronome / clock using HighResolutionTimer and getNextAudioBlock callbacks

Hi all,

I’m trying to sample a data stream (let’s say a slider’s output) that needs to be sampled at a varying frequency, somewhere between 500 Hz and 5000 Hz.

The only way I seem to be able to do this is by using a conventional audio stream as the clock and simply omitting the values I don’t want. The number of samples I omit varies based on how frequently I want to sample the data stream at any given point.
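
Roughly what I mean, as a minimal sketch (the names are illustrative, I’m assuming the usual JuceHeader.h setup, and this isn’t my actual code):

#include <JuceHeader.h>   // assumes the usual Projucer-generated header
#include <atomic>
#include <vector>

// Sketch only: use the audio callback as the clock and keep every Nth value.
struct SliderSampler
{
    double currentSampleRate = 48000.0;
    double desiredRate = 5000.0;                   // anywhere between 500 and 5000 Hz
    juce::int64 totalSamples = 0;
    std::vector<float> recordedValues;             // a real version would pre-allocate or use a FIFO
    std::atomic<float> latestSliderValue { 0.0f }; // written by the GUI thread

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill)
    {
        // e.g. 48000 / 5000 = 9 -> keep one value out of every 9 samples
        const int stride = juce::jmax (1, (int) (currentSampleRate / desiredRate));

        for (int i = 0; i < bufferToFill.numSamples; ++i)
        {
            if (totalSamples % stride == 0)
                recordedValues.push_back (latestSliderValue.load());

            ++totalSamples;
        }

        bufferToFill.clearActiveBufferRegion();    // no audio output, we only want the clock
    }
};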

So I’ve been trying to create an audio clock that can achieve this and have just finished watching The Audio Programmer’s very helpful tutorial on creating a metronome. It’s exactly what I was after, with one small caveat: the clock isn’t sample accurate, and its precision is only as fine as the user’s buffer size, which for me is 480 samples, or 10 ms (at a sample rate of 48000 Hz).

So, in an attempt to overcome this, I’ve implemented a HighResolutionTimer with its callback interval set to 1 / sampleRate, counting up between buffer callbacks / audio blocks. In theory, this should fire once for each new sample between buffer callbacks, right?

It doesn’t. The HighResolutionTimer fires at something like 2% of the rate it needs to.

Here’s the output from my console:

Sample Rate : 48000
Samples Per Block : 480
Total Clock Samples : 1
Total Clock Samples : 2
Total Clock Samples : 3
Total Clock Samples : 4
Total Clock Samples : 5
Total Clock Samples : 6
Total Buffer Samples : 480
Total Clock Samples : 481
Total Clock Samples : 482
Total Clock Samples : 483
Total Clock Samples : 484
Total Clock Samples : 485
Total Clock Samples : 486
Total Clock Samples : 487
Total Clock Samples : 488
Total Clock Samples : 489
Total Clock Samples : 490
Total Buffer Samples : 960

‘Clock Samples’ are callbacks from the HighResolutionTimer and ‘Buffer Samples’ are callbacks from getNextAudioBlock. As you can probably guess, the clock samples should count up to the buffer sample total before resetting for the next audio callback.

Anyway, here’s the code that renders that output (Metronome.cpp):

#include "Metronome.h"

void Metronome::prepareToPlay(int samplesPerBlock, double sampleRate)
{
    mSampleRate = sampleRate;
    DBG("Sample Rate : " << sampleRate);
    DBG("Samples Per Block : " << samplesPerBlock);

    // NB: this declares a local int that shadows the mInterval member, and
    // assigning 1 / sampleRate to an int truncates the result to 0. On top of
    // that, HighResolutionTimer::startTimer() takes its interval in whole
    // milliseconds, so it can never fire once per sample at 48 kHz.
    int mInterval = 1 / sampleRate;
}

void Metronome::countSamples(int bufferSize)
{
    mTotalBufferSamples += bufferSize;
    DBG("Total Buffer Samples : " << mTotalBufferSamples);
    mTotalClockSamples = mTotalBufferSamples;
    beginClock();
}

void Metronome::reset()
{
    stopTimer();
    mTotalBufferSamples = 0;
    mTotalClockSamples = 0;
}

void Metronome::hiResTimerCallback()
{
    mTotalClockSamples++;
    DBG("Total Clock Samples : " << mTotalClockSamples);
}

void Metronome::beginClock()
{
    startTimer(mInterval); // interval is in milliseconds
}

So, is there any way to achieve a sample-accurate clock?

Yes, use an atom clock :wink:

For synchronisation there are different clocks everywhere:

  • wall clock: this is the time as observed from the outside. You can compare it with the atom clock.
  • system clock: the milliseconds counted since system start.
    This is how the computer keeps track of time.
  • audio clock: the audio driver continuously pulling samples, monotonically increasing.
    This is the most reasonable clock to sync to. However, if a buffer goes undelivered, you can still end up out of sync with the wall clock.
  • HighResolutionTimer: a dedicated thread for timer events.
    This is still not synchronised: it doesn’t depend on anything else in order to be called, but by the same token it can drift out of sync with any of the other clocks.
  • gui thread: this is not synchronous to anything.
    You cannot rely on its timing at all, since it will always wait until no user interaction is taking precedence.

TL;DR: it is your design decision which time base you want to synchronise to. The audio clock as master makes sense, since the audio thread keeps it flowing. For video you still need to synchronise explicitly: you deliver a whole audio block, but you have to work out afterwards how far playback has progressed into that block. For blocks of fewer than 256 samples it might be good enough, but the frame rate is not stable then.
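
As a rough sketch of what “audio clock as master” means in practice (the member names and triggerEvent() are made up for illustration): keep a monotonically increasing sample counter in the audio callback and express events as sample positions, so an event’s offset inside the current block is known exactly.

#include <JuceHeader.h>   // assumes the usual Projucer-generated header

// Sketch: the audio callback as the master clock. All names are illustrative.
struct SampleClock
{
    juce::int64 totalSamples = 0;     // monotonically increasing audio clock
    juce::int64 nextEventSample = 0;  // absolute sample position of the next event
    int samplesPerEvent = 24000;      // e.g. a click every 0.5 s at 48 kHz

    void processBlock (juce::AudioBuffer<float>& buffer)
    {
        const int numSamples = buffer.getNumSamples();

        // Any event that falls inside this block has an exact sample offset.
        while (nextEventSample < totalSamples + numSamples)
        {
            const int offsetInBlock = (int) (nextEventSample - totalSamples);
            triggerEvent (buffer, offsetInBlock);  // place the event at that offset
            nextEventSample += samplesPerEvent;
        }

        totalSamples += numSamples;
    }

    void triggerEvent (juce::AudioBuffer<float>& buffer, int offset)
    {
        // e.g. write a click starting at 'offset', or record a value there
        juce::ignoreUnused (buffer, offset);
    }
};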


Another remark:
synchronising to any clock outside the audio stream doesn’t make much sense anyway, since in the case of an offline bounce, which nowadays every host and plugin should support, any realtime clock is irrelevant…


Thanks so much for the detailed replies @daniel

By atom clock I’m assuming you mean C++’s std::chrono::high_resolution_clock? It says here that it’s the highest-precision clock in C++.

I may have explained myself poorly: I’m not actually synchronising any two streams. I’m recording a single data stream. The only reason I chose to create an audio stream to record this data is that it seemed the most precise way to do so. There is in fact no ‘audio’ at this stage of my application, just an incoming stream of floats that I’m recording at a varying sampling frequency.

I think this answer from StackOverflow refers to an atom clock? I’ll have a go at implementing it.

In any case I’ll report back with my findings just in case any googlers stumble across this post in the future.

This is great advice @daniel! I did try to express that these tutorials are a work in progress so I hope the coding Gods don’t judge too harshly :wink:

I was actually jokingly referring to an atomic clock. Even the time we know as most accurate is relative to something else; Einstein, don’t beat me for simplifying :wink:

Oh I see.

I was finally starting to think ‘hmm, maybe C++ isn’t that confusing’ and then you hit me with this:

:thinking:

Hi there.

When you say slider, do you mean a physical control attached to your computer, as in, say, a potentiometer, or do you mean a graphical element on a GUI? If a physical control, how is it attached to the machine?

Hey @cesare, cheers for the reply.

Currently I mean sampling a JUCE GUI slider at a varying frequency.

However, this is basically a trial run to explore potential software implementations before I start sampling actual output data from hardware, which is in fact exactly what you said: a potentiometer.

When I do make the move to hardware, I’m unsure of how best to achieve high-resolution data sampling at a variable sampling frequency. I was thinking of an ADC over an Arduino, and then just dynamically adjusting the clock speed (if that’s even possible?) to achieve the variable sampling frequency.

If you’ve got experience with any of this I’m absolutely all ears!

Recently I implemented a PhaseGenerator using AudioPlayHead to get the DAW’s sample position and then counting samples while processing the buffer. It was pretty straightforward, so yes, counting samples seems like the most accurate way.

How exactly did you do this? This is the part I’m struggling with. There’s no native callback method for each individual audio sample, is there?

Right, the processing callback is per buffer. Counting samples inside the process block call is not going to be accurate with regard to actual wall-clock time; it’s accurate only for things like internal LFOs and such. It’s also very problematic to synchronize those calculations with other threads. The classic example is audio level meters in the GUI: if the audio buffer size happens to be quite large, the meters can’t easily be kept in sync with the audio. (Even with small audio buffers it isn’t truly accurate, but it can be considered “good enough”.)
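
For reference, the playhead-plus-counting approach mentioned above looks roughly like this inside a plugin’s processBlock (just a sketch; MyProcessor is a placeholder, not anyone’s actual code):

#include <JuceHeader.h>   // assumes the usual Projucer-generated header

// Sketch: derive an absolute sample position from the host playhead.
void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
    if (auto* playHead = getPlayHead())
    {
        juce::AudioPlayHead::CurrentPositionInfo info;

        if (playHead->getCurrentPosition (info))
        {
            // Sample position of the start of this block, according to the host.
            const juce::int64 blockStart = info.timeInSamples;

            for (int i = 0; i < buffer.getNumSamples(); ++i)
            {
                const juce::int64 absoluteSample = blockStart + i;
                // ... use absoluteSample to drive a phase / clock ...
                juce::ignoreUnused (absoluteSample);
            }
        }
    }
}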

It kind of sounds like you should maybe rethink what you are trying to do. Why not record your data into a custom data structure? Audio buffers/files are most likely not going to work.

Yeah, and it’s typically fine for anything graphical, because if it’s happening per buffer callback that’s usually going to be every 500-ish samples / ~100 Hz, which makes sense given that monitor refresh rates are only around 60 Hz anyway.

Yep, that’s the conclusion I’ve come to. I’m currently experimenting with std::chrono::steady_clock with some success.
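
For any future googlers, the rough shape of what I’m trying (plain C++; the names are just illustrative):

// Sketch: timestamp each incoming value with std::chrono::steady_clock.
#include <chrono>
#include <vector>

struct TimedValue
{
    std::chrono::steady_clock::time_point time;
    float value;
};

std::vector<TimedValue> recording;

void onSliderValue (float value)
{
    recording.push_back ({ std::chrono::steady_clock::now(), value });
}

// Later, the elapsed time between two entries:
// auto elapsed = std::chrono::duration_cast<std::chrono::microseconds> (
//                    recording[i].time - recording[i - 1].time);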

I’ll keep this thread updated if I make any kind of major breakthrough.

You can certainly use audio inputs to sample hardware like a pot, with a bit of external electronics to produce a suitable input voltage for the ADC. Rather than expecting the ADC to run at different rates, you’re best off simply sampling at the ADC rate and then downsampling the data to the required rate. This can be done with a number of different techniques, but the simplest is to build a suitable digital filter to band-limit the input sample data so that you can then decimate without creating aliases. The quality of the result you’re after will dictate the quality of the filter you need, so you can trade off runtime performance against alias rejection.
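
As a very rough illustration of the band-limit-then-decimate idea (a single one-pole low-pass stands in for a proper anti-aliasing filter here; a real design would use something much steeper, e.g. a windowed-sinc FIR):

// Crude sketch: band-limit with a one-pole low-pass, then keep every Nth sample.
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<float> decimate (const std::vector<float>& input,
                             double inputRate, double targetRate)
{
    const int factor = std::max (1, (int) std::round (inputRate / targetRate));
    const double pi = 3.14159265358979323846;
    const double cutoff = 0.45 * targetRate;                     // stay below the new Nyquist
    const double a = std::exp (-2.0 * pi * cutoff / inputRate);  // one-pole coefficient

    std::vector<float> output;
    float state = 0.0f;

    for (size_t i = 0; i < input.size(); ++i)
    {
        state = (float) ((1.0 - a) * input[i] + a * state);      // low-pass
        if (i % (size_t) factor == 0)
            output.push_back (state);                            // decimate
    }

    return output;
}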

If you want to do this sort of thing with a project board, I’d certainly recommend the Bela boards (https://bela.io/).

We’ve been playing with these for use with SOUL, and they are excellent: very low latency when processing audio, and lots of spare inputs which can be sampled for controls like the pots you’re thinking of. Typical applications for these boards would be guitar stomp boxes or a programmable Eurorack-compatible board.


That’s exactly what I was trying to decide: it seems like dynamically adjusting the clock speed (sampling rate) isn’t really the done thing. So yeah, like you said, best to just oversample and then let the program itself omit the values it doesn’t need.

I’m actually expecting the output values from the hardware not to be too wild in frequency range, but yeah, the filtering should certainly be interesting, especially given that the Nyquist frequency will be a moving target.

Woah, I don’t know what rock I’ve been living under, but I’d never heard of Bela or SOUL. A Bela board looks perfect for my current project, and SOUL looks really interesting. I’ll definitely be keeping a close eye on both. Thanks so much for the 10/10 info :slight_smile: