Smooth animations in GUI?

Our audio plugin has an animation that’s synced to the playback (it visualizes playback speeds).
The animation consists of a filmstrip in a custom Component that we update at 25fps.

The Component inherits juce::Timer, running at 25Hz, and in timerCallback() we call repaint() via the message thread (MessageManager::callAsync ([this] { repaint(); })).
I suppose the async call is why the animation is a bit jittery at times?

Is there a better way to do this? I feel like I might be missing something obvious!

juce::Timers aren’t designed to be accurate, they’ll just give you a callback roughly N times a second. Similarly, calling repaint() has no guarantees as to when your Component will actually be asked to draw itself; it’ll just happen some time in the future.

Because of both these things, it’s not a good idea for any animation code to assume that the time between calls is consistent; instead you should query the actual time elapsed between calls and use that to adjust your animation.

void paint (juce::Graphics& g) override
{
    const auto now = juce::Time::getMillisecondCounter();
    const auto timeSinceLastPaint = now - m_lastPaintTime;

    // Update animation using timeSinceLastPaint...

    m_lastPaintTime = now; // m_lastPaintTime is a uint32 member
}

Thank you!

The phase is already continuously calculated in the audio processor every 32 samples, so the paint() method just reads the variable and draws the animation based on that. Which means that the Component doesn’t make any assumptions about the elapsed time between frames…

So I guess my problem is that paint() sometimes won’t get called for a while? There’s no fix for that, right?

If paint() is not called for a while, you will notice that across the whole GUI. Even the host’s GUI would start to stutter, because it shares the message thread with your plugin.

When you see jumps in the animation (I use a slightly higher rate of 30 FPS), that usually means that the model doesn’t move smoothly. That’s what @ImJimmi was pointing towards: if the paint interval is varying, the animation might become unsteady as well. So using the actual elapsed time to calculate the new position of the animated object would help.

Since you mention the value is updated every 32 samples, it is unlikely that this is the root cause.
But when you say you update every 32 samples, does that mean you have a block size of 32 samples? Probably not.
Your loop in processBlock will run over the usual 512 samples, i.e. doing 16 updates in a very short burst and then waiting a while before doing the next 16 updates.

When people say the processing is “real-time”, they actually mean it is fast enough to be played back continuously; apart from that, the processing clock and the presentation clock are independent.


My DSP code does run in 32-sample blocks internally. I implemented an “adapter” in my AudioProcessor subclass, so if it’s asked to produce 512 samples, it will call the internal process() function as many times as required to fill the host buffer, i.e. 16 times.

This lets me use fixed-size temporary buffers in my synth, and it makes it very easy to implement stuff that doesn’t necessarily need to be updated every sample, like filter frequencies and LFOs, while still having the predictability of a known block size.

Anyway, I’m suspecting that my GUI might be starting to get a little heavy (because I’m drawing a lot of film strips now!), so I think I’m going to have to start profiling it to see what I can do to make it run faster…

FWIW the Timer class always invokes timerCallback() on the message thread, so you can just call repaint() directly instead of using MessageManager::callAsync().

That’s a very good strategy. However, that is exactly the scenario I was talking about (except that the loop will run 16 times, as you correctly pointed out).

The updates inside the loop happen quickly after each other and then the audio thread moves on to the next plugin or whatever it has to do.
If the GUI is polling at regular time intervals, it will almost always see only the last updated state.

To get that aligned properly you would need to keep all the states you created in that loop together with timestamps. The GUI would then need to figure out the right state to display each time.

That is always a good idea, although I think it’s unlikely that the problem is the painting time, unless you see the GUI really stalling.

Ah, of course! I had to add the async call for listener callbacks from the DSP code, but it was of course totally unnecessary to do it in the timer callbacks… Thank you!
While it didn’t solve the stuttering animation problem, it was great to get rid of those extra async calls. :slight_smile:

I see! Yes, that makes sense. Thanks for clarifying!

Although I think it would be a bit of a pain to keep track of all the states and time stamps and interpolate between them…

Maybe it’s better then to just have the GUI read the playback speed from the audio processor and do its own calculations on how far to advance the animation? That way I would only need to keep track of the current time stamp and the time stamp of the previous frame, like @ImJimmi suggested from the beginning.

While speed variations during the course of the buffer will be lost, I think that should be a non-issue in this case, where smoothness is more important than accuracy.

I’ll try it out! :slight_smile:

I agree, the pain of implementing that time conversion and FIFO of states doesn’t seem justified.

It is also good to keep in mind what the target update rates are.
Assuming a sample rate of 48 kHz the 32 sample blocks are calculated 1500 times per second, while you are visualising more likely at 30 FPS (factor 50).

It really depends on what you want to visualise.

OpenGL with continuous painting would solve this. Plus you won’t get lag in Logic.


I tried using OpenGL in another plugin before, but it wouldn’t play nicely with window resizing in Reaper, so I abandoned it. (When making the window smaller, the GUI would be “anchored” to the lower edge of the window, so the top of the plugin would disappear first, probably due to the inverted y-axis of the OpenGL coordinate system.)

But this was about 2 years ago, so it might be worth trying it again?

If the data that is being visualised is updated in irregular intervals (which we kind of established now) OpenGL would expose that fact even worse.
Understanding the problem is the first part of the solution.

Inherit from public juce::OpenGLRenderer in your editor.

Editor header:
juce::OpenGLContext openGLContext;
void newOpenGLContextCreated() override;
void renderOpenGL() override;
void openGLContextClosing() override;

Constructor (after the size is set):
openGLContext.setRenderer (this);
openGLContext.setContinuousRepainting (true);
openGLContext.setComponentPaintingEnabled (true);
openGLContext.attachTo (*this);

→ use this like paint:
void PluginAudioProcessorEditor::renderOpenGL()
{
    auto desktopScale = (float) openGLContext.getRenderingScale();
    auto width1  = juce::roundToInt (desktopScale * 675.0f);
    auto height1 = juce::roundToInt (desktopScale * 675.0f);

    std::unique_ptr<juce::LowLevelGraphicsContext> gc (juce::createOpenGLGraphicsContext (openGLContext, width1, height1));

    if (gc == nullptr)
        return;

    juce::Graphics g (*gc);

    // In here, multiply your pixel coordinates by the desktop scale.
}

Then you can use that function like your paint() method with JUCE’s drawing classes, and to my understanding it renders on the graphics card instead of the CPU. Also, when I used a timer callback at around that same repaint() speed for my animation in Drip, it froze most Mac DAWs’ backgrounds.

The old OpenGL support did have weird issues with plugins around two years ago when I released X, like not letting me open more than one plugin instance. However, I just updated to OpenGL rendering and the dynamics are way smoother.

I know I’m not the best programmer and hack stuff together. As @Daniel mentioned, if you’re just displaying the data wrong then it’s wrong. But trying to sync the audio and graphics threads has always been a headache. At least with continuous repainting you won’t have to deal with 50 ms jumps.

I think OpenGL is 60 Hz. You can still use your timer callback, just don’t call repaint().

I don’t know, it helped me, and I’m done with the paint method for now because it will slow down macOS.