How do I make a metronome with a synchronized animation?


First, let me just state that I am new to JUCE and extremely rusty with C++. A long time ago, I worked with Pure Data (which I believe is built with JUCE?). In that visual programming language, I remember an object called a “bang”, which is basically just an event that can be used to trigger something. By connecting the same bang to multiple objects, you could synchronize them all to do their respective things at exactly the same time, whether that be generating a sound or moving a rectangle across the screen. This is what I want, only I want to do it in code.

I want to use this to create a simple metronome with an animation that updates each time there is a “beep”. I have already figured out how to generate the beeps. The trouble comes in with the animation. Roughly, my app is structured like this:

  • My main component inherits from AudioAppComponent
  • In the AudioAppComponent::getNextAudioBlock method, my main component passes the AudioSourceChannelInfo to a custom Metronome class that I created. This class then fills the buffer with the right amounts of silence or “beep”.
  • The animation happens in another custom class I named MetronomeVisualizer that inherits from Component. Each time there is a “beep”, a custom method MetronomeVisualizer::onMetroTick has to be called which tells it another metronome tick has occurred. It would then have to be repainted using the Component::repaint method.

This approach is problematic for two reasons:

  1. Which thread should call MetronomeVisualizer::onMetroTick? This can’t happen in the Metronome class’s thread; it only fills the buffer. The buffer doesn’t get played until some time very shortly after, by the audio thread (how do I even access that?).
  2. Assuming I even knew where to call MetronomeVisualizer::onMetroTick, Component::repaint is only a suggestion. The JUCE documentation states:

“Calling this will not do any repainting immediately, but will mark the component as ‘dirty’. At some point in the near future the operating system will send a paint message, which will redraw all the dirty regions of all components. There’s no guarantee about how soon after calling repaint() the redraw will actually happen, and other queued events may be delivered before a redraw is done.”

If that’s the case, what do I have to do to guarantee that the repaint will be fast enough that it does not fall out of sync with the audio beeping? AnimatedAppComponent looks promising, but it too simply calls the repaint method at regular intervals based on a timer (which is probably inaccurate on top of it). And again, how do I get access to the right thread to even trigger the animation to start at the right time?


This isn’t a direct answer, but let’s clear up that quote:

Repainting at 60 Hz is about 16 ms between frames.
In terms of quarter notes, starting from a bpm of 120, you get 500 ms per quarter note:
60 s ÷ 120 beats × 1000 ms = 500 ms per qn.
Swap 500 for 16 and solve for beats:
60 ÷ x × 1000 = 16
60 × 1000 = 16 × x
60 × 1000 ÷ 16 = x
x = 3750 beats per minute if your metronome is clicking at 60 Hz.
Let’s divide that by 4, so the 60 Hz represents how fast 16th notes go by:
that gives you 937.5 beats per minute if 4 frames of 60 Hz repainting represent 1 quarter note.

So: don’t clog your message thread with garbage, and your calls to repaint() will definitely be fast enough to handle any tempo you need your metronome to animate.

Convert your tempo into milliseconds per beat, keep track of how long has elapsed between repaints, and based on the time elapsed, either draw your metronome lighting up or don’t.
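That bookkeeping can be sketched in a few lines of plain C++ (the function name and the fixed dot count are my own illustration, not anything from JUCE):

```cpp
#include <cmath>

// Hypothetical helper, not JUCE API: given the milliseconds elapsed since the
// metronome started and the tempo, return which of numDots dots (0-based)
// should currently be lit.
int dotForElapsedTime (double elapsedMs, double bpm, int numDots)
{
    const double msPerBeat = 60000.0 / bpm;                  // 500 ms at 120 bpm
    const int beatsElapsed = (int) std::floor (elapsedMs / msPerBeat);
    return beatsElapsed % numDots;                           // wrap around the dots
}
```

In paint() you would then just compare this against the dot drawn last frame and light the new one up.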


I would say this task doesn’t justify an additional thread, so you are fine with the message thread (which also does the painting) and the independent audio thread.

The thing about eyes is that they are quite slow, so it is not so important exactly when you paint your metronome. It is different for the audio thread, since we can easily hear discontinuities in the signal; but images come in as individual frames, and the eye only notices if there are big changes.
(O.T. maybe you saw “Fight Club” (1999), where Edward Norton and Brad Pitt explain nicely how many frames you can cut into a movie before you notice… :wink: )
For a century, people enjoyed movies at 24 frames per second. Even silent movies, which were captured at 18 frames per second, are “ok” for the eye if shown with a triple-speed shutter (showing the same frame three times). And that’s what your display does anyway: it shows the image 60 times per second, so when game developers boast about their 60 fps, the difference is really not noticeable.

TL;DR: set up a timer at something between 30 and 60 Hz to trigger repaint, and in paint ask the model driven by the audio thread which position to display the metronome at (remember to make this number an atomic, since it will be accessed from both the audio and the display thread).
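The handoff between the two threads boils down to a single atomic. A plain C++ sketch (in the real app the writer would sit in the audio callback and the reader in a juce::Timer callback; names are mine):

```cpp
#include <atomic>

// The audio thread is the only writer, the GUI timer the only reader,
// so one atomic is all the synchronisation needed.
std::atomic<int> currentBeat { 0 };

// Audio thread: called whenever a beep starts (4 dots assumed).
void onMetronomeTick()
{
    currentBeat.store ((currentBeat.load (std::memory_order_relaxed) + 1) % 4,
                       std::memory_order_relaxed);
}

// Message thread: called from the 30-60 Hz timer before repainting.
int beatToDisplay()
{
    return currentBeat.load (std::memory_order_relaxed);
}
```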


Thank you both for your responses! I think this only addresses my secondary concern, though: whether or not the timer for repaint will be accurate enough to drive the animation. The main problem is making sure it stays in sync with the audio. I suppose I could set it up so that they both start at (almost) exactly the same time, using an event model for the play button being pressed, and just hope that discrepancies don’t build up. Maybe I’m overthinking things, and this will work fine? But I think it would be better if the animation and the audio shared a common “clock” somehow. I’m thinking that the animation has to keep track of how much time has passed since the Component::paint method was called and update itself accordingly. The question is, what is the best way to do this? Is there some counter or clock that the audio thread could share with the animation?


Well, let us know what kind of model you want to display… basically, you create a model of the movement and calculate the position for the current time.

E.g. a pendulum would work with a sin(t) for a clock, where the tick is in the middle. For a metronome, with the tick on each side, one period should span two beats, using a cos(t), which has its extremes on the outside.
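As a plain C++ sketch of that pendulum model (the scaling to beats is my own framing):

```cpp
#include <cmath>

// Map elapsed beats to a pendulum deflection in [-1, 1].
// cos starts at +1, so the pendulum begins at one side and reaches the
// other side (-1) exactly one beat later: one full period = two beats.
double pendulumPosition (double beatsElapsed)
{
    const double pi = 3.14159265358979323846;
    return std::cos (pi * beatsElapsed);
}
```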

You will have to sync that with the information from AudioPlayHead.

Does that help?


The animation is very simple. It’s just 4 dots. The dots can either be full or empty (on or off). Only one dot is turned on at any given time, and each time the metronome clicks, the next light turns on.

You will have to sync that with the information from AudioPlayHead.

I don’t think that will work, because the documentation for the getCurrentPosition() method says the following:

You can ONLY call this from your processBlock() method! Calling it at other times will produce undefined behaviour, as the host may not have any context in which a time would make sense, and some hosts will almost certainly have multithreading issues if it’s not called on the audio thread.

That doesn’t work, because then I can only update the animation’s time elapsed when processBlock() is called, which I don’t believe is very often.


That is correct. But you can set a variable in the processor inside the processBlock:

// member variable:
std::atomic<int> quarter { 0 };

void processBlock (AudioBuffer<float>& buffer, MidiBuffer& midi) override
{
    AudioPlayHead::CurrentPositionInfo info;
    getPlayHead()->getCurrentPosition (info);
    quarter.store (int (info.ppqPosition) % info.timeSigNumerator); // beat within the bar
    // ...
}

int getQuarter() const
{
    return quarter.load();
}
You can now use this quarter in your paint to determine which beat to highlight…


Okay, I’m not going to give you the definitive answer, but I’m in a similar boat while making a step sequencer. It may be inefficient, but it’s the way I came up with, and it has no delays in painting.

What I do is have a Component called Step that, for your case, only needs 2 states: state 0 and state 1. State 1 is the one marked by the metronome, while state 0 is inactive. Each state paints the component with a colour in an if/switch structure, and the class has setter/getter functions that call repaint() when used.
Each step is added to an array indexed by its position (i.e. matrix[0] is step 1, matrix[15] is step 16). I keep a counter variable (i.e. ActiveStep) that tracks the active step, so it is just a matter of incrementing the variable by 1 each tick; if the current index in the for loop corresponds to ActiveStep, you set that step to state 1 (and hence paint it).

That works like a charm. The problem is that the “tick” isn’t developed yet, because I had to move on and code other things, so I haven’t really come up with a solution. I was using a timer to do the ticks as a temporary workaround, and I guess if you don’t find any other solution you could build a lookup table associating timer intervals with each BPM, but that’s a really cheesy way. From what I’ve read from Jules, the most accurate approach is to count samples, so if the boss points there, I bet that’s the best way you can go.


Thank you Daniel! I’m starting to see where you are going with this and it looks very promising. However, it raises a bunch more questions for me …

  • Do any of the other 30+ virtual methods have to be implemented (aside from processBlock) to do what I need to do?

  • From where is AudioProcessor::processBlock called? It does not seem to be sufficient to just implement this class and create an instance… it has to be registered somewhere and with something, or else it has to be called from somewhere in my code.

  • How frequently is this method called?

  • I assume that the quarter notes are calculated based on the AudioPlayHead::CurrentPositionInfo::bpm value. Can I change the BPM to reflect the BPM the user selects for the metronome?

  • How can I guarantee that the location in the audio buffer where I render my metronome “click” corresponds to the start of one of these quarter notes?

Thanks again for your help!


If you are implementing an AudioProcessor, yes, there are many methods to implement.

However since you seem to be using AudioAppComponent (Daniel seemed to miss that in your question and was assuming you are doing an audio plugin) in this case, there is nothing to do. You should not really inherit from AudioProcessor, unless you are planning to turn your project into a plugin later. AudioAppComponent::getNextAudioBlock is almost equivalent to AudioProcessor::processBlock. (The way the passed in audio buffer is supposed to be used is slightly different between them.)

The call frequency to AudioAppComponent::getNextAudioBlock and especially AudioProcessor::processBlock can be whatever. It typically will be something like every 64 to 1024 audio samples. (So, at 44100 Hz samplerate, between about 1.5 and 23 milliseconds.) For AudioAppComponent it will likely be the size of the audio hardware buffer.
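The arithmetic behind those numbers, as a one-liner (the function name is mine):

```cpp
// Milliseconds of audio represented by one callback's buffer.
double bufferMs (int numSamples, double sampleRate)
{
    return 1000.0 * numSamples / sampleRate;   // e.g. 1024 samples @ 44.1 kHz ~ 23 ms
}
```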

I don’t think you will be able to even use AudioPlayhead directly with AudioAppComponent since it’s not an AudioProcessor that has an implementation for it.

You can’t easily guarantee any kind of super precise timing between the audio processing and the GUI visuals. That’s just a thing you probably will have to live with and hope that if you update the GUI visuals often enough (with a timer), they more or less match up with the heard audio.


Xenakios, yes, I am using an AudioAppComponent. The project I am working on is a cellphone app. I guess that does scrap my hopes of using the AudioPlayhead.

I have been trying the approach of rendering the animation with its own timer, starting both the animation and the audio at the same time, and crossing my fingers that they stay in sync. However, the results so far are completely unusable. Here is what I have done.

Inside of my AnimatedAppComponent::update method, I do the following:

void SL_MetroVisualizer::update() {
    updateMetroCount();
}

void SL_MetroVisualizer::updateMetroCount() {

    int64 mostRecentHighResTicks = Time::getHighResolutionTicks();
    int64 highResTicksSinceLastUpdate = mostRecentHighResTicks - highResTicksAtLastPaint;

    double secondsSinceLastUpdate = ((double) highResTicksSinceLastUpdate / Time::getHighResolutionTicksPerSecond());
    double beatsPerSecond = bpm / 60.0;
    double metroTicksSinceLastUpdate = secondsSinceLastUpdate * beatsPerSecond;

    if (metroCount + metroTicksSinceLastUpdate >= NUMBER_OF_DOTS) {
        metroCount = metroTicksSinceLastUpdate;
    } else {
        metroCount += metroTicksSinceLastUpdate;
    }

    highResTicksAtLastPaint = mostRecentHighResTicks;
}

My AnimatedAppComponent::paint method looks like this:

void SL_MetroVisualizer::paint (Graphics& g) {
    g.fillAll(backgroundColor);   // clear the background

    if (SL_Model::getInstance()->getPlayValue()) {
        // ...
    } else {
        // ...
    }

    drawDot(dot1Area, g, 0);
    drawDot(dot2Area, g, 1);
    drawDot(dot3Area, g, 2);
    drawDot(dot4Area, g, 3);
}

void SL_MetroVisualizer::drawDot(Rectangle<float> area, Graphics &g, int dotNumber) {
    if (dotNumber == (int) metroCount) {
        // ...
    } else if (dotNumber + 1 == metroCount || ((metroCount == 0) && dotNumber == NUMBER_OF_DOTS - 1)) {
        g.drawEllipse(area, 1);
    }
}

Not only are the metronome audio and the animation out of sync, but the animation itself is not even smooth! It jerks around, especially when the BPM is higher. Seriously, what is going on here? Am I doing something wrong?


If I understood the code correctly, you are counting time separately in the GUI thread? That is obviously not going to work. You need to count time in audio samples in the audio thread. (In the case of AudioAppComponent, in the getNextAudioBlock function.) The GUI objects would then poll that counted time in a timer callback. Aren’t you doing some sample counting anyway, in order to generate the audio?
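A plain C++ sketch of that scheme (the names are my own, not JUCE API): the audio callback only adds each block’s size to an atomic sample counter, and the GUI timer converts the running total into a beat number whenever it repaints:

```cpp
#include <atomic>
#include <cstdint>

std::atomic<std::int64_t> samplesPlayed { 0 };

// Audio thread: call once per getNextAudioBlock with the block's size.
void countSamples (int numSamplesInBlock)
{
    samplesPlayed.fetch_add (numSamplesInBlock, std::memory_order_relaxed);
}

// Message thread: derive the current beat from the running sample total.
int currentBeat (double sampleRate, double bpm, int numDots)
{
    const double samplesPerBeat = sampleRate * 60.0 / bpm;
    const auto total = (double) samplesPlayed.load (std::memory_order_relaxed);
    return ((int) (total / samplesPerBeat)) % numDots;
}
```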


Yes, you are correct. But how would I go about counting time in audio samples? getNextAudioBlock basically takes in a buffer, and I believe you said above that it is called at irregular intervals. How would I know how much audio has actually been played since the last call? Isn’t that what the AudioPlayhead is supposed to do? And if I am counting time rather than samples, why would it be better to count it there than in the GUI thread? I assume it’s because the audio thread has higher priority?


You will just have to hope the amount of samples you need to generate in the getNextAudioBlock call more or less correlates with the audio playback in real time. (Also with AudioAppComponent the buffer sizes are not likely going to wildly vary, if at all. It’s more a problem with audio plugins where the hosts can decide to do pretty much anything with the buffer sizes.)

Using Time::getHighResolutionTicks etc to me just seems to overcomplicate things in this case. In audio applications/plugins the best timing source are the getNextAudioBlock/processBlock calls. (Possibly with plugins AudioPlayhead may provide timing info that is adjusted in some manner by the host application, though.)


If I understand you correctly, what you are saying is something like the following: If the buffer has X samples in it, then I would assume/hope that X samples have been played since the last call. Is that right?


Yes, more or less. (There can always be some surprises with how things behave, so it can’t be said for sure.)

I wonder though, how are you generating your audio beeps at the moment? Surely you must already have some sample counters to determine when to start and stop the beeps? Can’t you just make your GUI poll that information?


Yes, I do have counters for rendering the tick, but you lose me after that. The buffer that is passed to getNextAudioBlock is, in fact, a buffer. That is to say, when my code decides whether to render a tick into this buffer, that tick will not be played until some time after the current buffer is exhausted, at which time my code will already be filling the next buffer. While the buffer might take, let’s say, 0.00003 seconds to fill, it could actually represent 0.1 seconds of playable audio. My counters therefore do not track anything in real time. They only serve as indices into the buffer to fill it properly. At best, I think I could use this info to tell me if the current buffer has the start of a tick somewhere in it. But again, that tick won’t be played until later, so I’m not sure how useful that would be. I suppose it also depends on how small the buffer is relative to the silence between ticks.

In fact, in my very first stab at this, I did try using those counters. It didn’t work, and I’m pretty sure it was for the reasons I mentioned above. However, I think there may be something to your suggestion that I assume that getNextAudioBlock is called at fixed intervals and using that fact to time the animation. I’ll try this out later and let you know how it goes!


Do you maybe end up getting some unusually large buffer sizes you have to fill in the getNextAudioBlock? On desktop systems something larger than, say, 2048 samples is starting to be pretty big and would not be nice to work with for things like GUI visualizations. (Because things like note/sound on/offs might happen within the same callback and the GUI can’t easily be made to react to that information in sync with the playing audio.)


Just to put things into perspective, 2048 samples is really a worst-case scenario; it is more likely to be 480 on Windows and 512 on Mac, but let’s roll with 2048 samples for the sake of the experiment. At a fairly low sample rate of 44.1 kHz, each buffer then lasts 46.4 msecs, i.e. about 21 buffers per second. You are still within the perceivable limit for each update.

Against this stands the number of beats to be displayed:
180 bpm ends up as 3 beats per second: a visualisation here starts to become unusable, but it is still technically possible.

In terms of accuracy: as long as you don’t miss a buffer (i.e. the next call arriving before you have filled the previous one), this is the best accuracy you can get. The GUI / message thread runs at a lower priority and only gets served when time is available.
As long as you stay relative to your previous tick, you are 100% accurate. A worst-case visual delay of 46 msecs, as calculated before, can by no means be distinguished.

So the recipe would be:

  • calculate the number of samples per tick: sampleRate * 60 / bpm
  • count down in each getNextAudioBlock; if the countdown is less than the buffer size:
    • reset it to samplesPerTick minus the samples left in this buffer
    • start playing your tick sound by copying it into the buffer
  • otherwise, simply subtract the buffer size from the countdown

If you use the countdown as an atomic&lt;int&gt;, you can simply read it from the GUI: if it is close to the maximum of the countdown, display your marker; if not, don’t. You can even use the countdown value to create a nice fade-out animation.
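A minimal sketch of that recipe (the struct and member names are my own framing, not code from this thread):

```cpp
#include <atomic>

struct MetronomeCore
{
    double sampleRate = 44100.0;
    double bpm        = 120.0;
    std::atomic<int> countdown { 0 };    // samples until the next tick

    int samplesPerTick() const { return (int) (sampleRate * 60.0 / bpm); }

    // Audio thread: call once per block. Returns the sample offset inside the
    // block where a tick starts, or -1 if no tick falls into this block.
    int process (int numSamples)
    {
        int remaining  = countdown.load (std::memory_order_relaxed);
        int tickOffset = -1;

        if (remaining < numSamples)
        {
            tickOffset = remaining;      // copy the beep in starting here
            remaining  = samplesPerTick() - (numSamples - remaining);
        }
        else
        {
            remaining -= numSamples;
        }

        countdown.store (remaining, std::memory_order_relaxed);
        return tickOffset;
    }
};
```

The GUI timer then just reads countdown: a value close to samplesPerTick() means a tick has just fired, so light the marker (or drive a fade-out from it).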

Hope that helps


I followed your advice, and the result is only slightly better than before. The animation still falls way out of sync at higher speeds (approaching 200 BPM).

Part of the reason is that the Pixel emulator that is running on my 2017 Macbook Pro seems to regularly pull in buffer sizes as high as 5040 with a sample rate of 48000. That means each block is actually 105 milliseconds. To put that into perspective, my app should be able to tolerate beat rates in excess of 200 BPM. That means that there will be a beep about every 300 milliseconds or more. 105 milliseconds is more than a 30% margin of error. That is completely unacceptable.

But, that’s not even the worst part. I also put the following code in my paint() method to get an idea of how often it was being called:

int64 now = Time::currentTimeMillis();
int64 elapsedTimeInMilliseconds = now - lastUpdate;
Logger::writeToLog("Last paint: " + String(elapsedTimeInMilliseconds));

What I see in the terminal is that it can take anywhere between 90 and 200 milliseconds between calls to the paint() method. I’ve even occasionally seen numbers as high as 500 flit past! I have tried setFramesPerSecond() with values between 30 and 500. This made very little difference, and I believe 1000 is the highest you are allowed to go. It boggles my mind that even though I set 500 frames per second, the best that I am apparently getting is 10! It really makes me wonder if this is not just a limitation of the emulator. But, it is running on a relatively new system which has 16 GB of RAM. The emulator itself is allocated 4 GB and I closed all other applications except Android Studio and Activity Monitor.

My paint() method has virtually nothing going on in it, too. Just as a sanity check, I used the ScopedTimeMeasurement class to see just how long it takes to execute:

void SL_MetroVisualizer::paint (Graphics& g)
{
//    int64 now = Time::currentTimeMillis();
//    int64 elapsedTimeInMilliseconds = now - lastUpdate;
//    Logger::writeToLog("Last paint: " + String(elapsedTimeInMilliseconds));
//    lastUpdate = now;

    double timeSec;

    {
        ScopedTimeMeasurement m (timeSec);
        g.fillAll(backgroundColor);   // clear the background

        if (SL_Model::getInstance()->getPlayValue()) {
            // ...
        } else {
            // ...
        }

        drawDot(dot1Area, g, 0);
        drawDot(dot2Area, g, 1);
        drawDot(dot3Area, g, 2);
        drawDot(dot4Area, g, 3);
    }

    Logger::writeToLog ("paint() took " + String (timeSec) + " seconds");
}

The drawDot() method is just this:

void SL_MetroVisualizer::drawDot(Rectangle<float> area, Graphics &g, int dotNumber) {
    if (dotNumber == (int) metroCount) {
        // ...
    } else {
        g.drawEllipse(area, 1);
    }
}


The output I got was that it takes about 0.5 milliseconds on average. So, I’m really stumped here. The only thing I can think to do is try running this on an actual physical device, but shouldn’t such a new and powerful computer be able to crush this with no problem?