Fluent animation frame timing



Hello jucers,

been wondering… How do you even make a fluent animation in JUCE? That’s such a basic thing that I’m really surprised not to find any questions about it. I’ll explain what I mean…

My application was a little bit laggy even in release mode, so in the end I decided to break it down to the utter basics…

And I ended up with this simple AnimatedAppComponent:

//#define HIGH_RES_TIMER

/*
    This component lives inside our window, and this is where you should put all
    your controls and content.
*/
class MainContentComponent   : public AnimatedAppComponent
                           #ifdef HIGH_RES_TIMER
                             , public HighResolutionTimer
                           #endif
{
public:
    MainContentComponent() : img (Image::PixelFormat::RGB, 120, 60, true)
    {
        setSize (400, 200);
        setFramesPerSecond (60);
       #ifdef HIGH_RES_TIMER
        startTimer (16);   // HighResolutionTimer interval in ms
       #endif
    }

    void step()
    {
        static int xpos = 0;
        static int radius = 1.5f;
        static bool ascending = true;

        Graphics g (img);
        g.fillAll (Colours::black);    // clear the previous frame
        g.setColour (Colours::white);

        g.fillEllipse (xpos - radius, 30, radius * 2, radius * 2);

        xpos += (ascending) ? 1 : -1;

        if (xpos == 0 || xpos == img.getWidth())
            ascending = ! ascending;
    }

    void update() override
    {
        // This function is called at the frequency specified by the setFramesPerSecond() call
        // in the constructor. You can use it to update counters, animate values, etc.
       #ifndef HIGH_RES_TIMER
        step();
       #endif
    }

   #ifdef HIGH_RES_TIMER
    void hiResTimerCallback() override
    {
        step();
    }
   #endif

    void paint (Graphics& g) override
    {
        // (Our component is opaque, so we must completely fill the background with a solid colour)
        g.fillAll (getLookAndFeel().findColour (ResizableWindow::backgroundColourId));
        g.drawImage (img, juce::Rectangle<float> (0, 0, (float) getWidth(), (float) getHeight()));
    }

    void resized() override {}

private:
    Image img;
};


It’s just a ball bouncing from left to right in a really small image. The simplest thing you can think of…

Note that performance is not an issue. The frame rate is right what it should be and the CPU load is very low (about 7%). But obviously the timing is not right. Some frames take longer and some are faster. You could call it a jitter, or micro stuttering…

Of course the simple explanation is that the basic message-based timer is just not precise enough to be used for animation timing. Well, fair enough. So I decided to try the HighResolutionTimer (see the HIGH_RES_TIMER macro). But it does not help at all. I would even say it’s worse (though that’s only my impression).

I think that this is such a basic question there has to be some best practice for that. How would you go about it?


//For this test I am using 64 bit Windows 7


I don’t think the high resolution timer is going to help you here - paint always happens on the main thread so whatever jitter is present on a regular timer will probably show up there too.

One thing you can do to help is to measure the amount of time since the last frame and scale your animation based on that. That way if frames are dropped or uneven your animation will still progress at the same rate.


I think the problem is rather, that you read and write the image from two different threads, in case you use the HighResolutionTimer.

Rather keep the paint in the paint() method, but do the changes, like ball position in the step.

N.B. you don’t have to write the namespace for non-static methods (IMHO you shouldn’t, to avoid confusion)


EDIT: actually there is more:

  • don’t use static variables.
  • if the variables are approached from different threads, wrap them into Atomic or std::atomic
  • you assign an int with a static float value: static int radius = 1.5f; :wink:
void step()
{
    xpos += (ascending) ? 1 : -1;

    if (xpos == 0 || xpos == img.getWidth())
        ascending = ! ascending;
}

void paint (Graphics& g) override
{
    // (Our component is opaque, so we must completely fill the background with a solid colour)
    g.fillAll (getLookAndFeel().findColour (ResizableWindow::backgroundColourId));

    g.fillEllipse (xpos - radius, 30, radius * 2, radius * 2);
}

std::atomic<int> xpos = {0};
float radius = 1.5f;
bool ascending = true;

Hope that runs more smoothly


Yes, exactly. The timer itself is one thing, but since the paint() method is called later on from the main message thread, the HighResolutionTimer does not help anything.

@daniel: Guess I’ll have to explain a bit more…

I need to paint it to the image because in the original application I also send the content I rendered to an ArtNet device (Ethernet, UDP, doesn’t probably matter).

But even if I eliminate the image altogether and render it directly to the graphics object passed to the paint() function - I can still see the ball stuttering.

As for the static variables… Normally I don’t use them at all. The only reason I did it here was that I wanted to make a quick demo where I could tweak the parameters to get the best view of the problem I was trying to demonstrate. Making them member variables would force me to write them in two places (declaration and initialization). Well, this was quicker… And now we have wasted more time talking about it than I saved by declaring them static :smiley:

I don’t know any of the abbreviations you used (N.B., HTH).

And yeah, int radius = 1.5f; makes little sense but come on, it was midnight already…


I see, in which case it is even more important to make sure your code is thread safe and locks properly while writing into the image. Good luck…

Blessed C++11: you can now initialise in the member list with a default value. What I typed:

    std::atomic<int> xpos = {0};
    float radius = 1.5f;
    bool ascending = true;

initialises the values already, no need to assign a value in the constructor any more…

strange… does the original AnimationAppExample stutter as well? Here on my mac it looks fluid…

Sorry, let me save you the google:
N.B. = “nota bene” (wikipedia)
HTH = hope that helps
IMHO = in my humble opinion

Anyway, seems like you have some research coming up, sorry I couldn’t be more helpful…


Should be thread safe. After all, everything happens in the message thread (the HighResolutionTimer was just an experiment).

Hm, I use C++11 quite a lot lately, but I didn’t know that. Thanks for the tip.

It’s hard to tell because the animation is quite complex and not a simple linear movement, but I think it actually stutters as well. Sometimes (once or twice a second) I can see it lag for what must be just milliseconds, but I think I can see it.

You may say that most people wouldn’t notice and that it’s just a detail that no one’s going to observe but… Imagine I render to an area of just 120x60 pixels which is then stretched to something like a 2x1 m area (LEDs). The distances get multiplied by several orders of magnitude and then everybody can see it. A 1 px lag translates to something like a 2 cm lag… And that you can definitely observe.

Of course I could have googled that. But I don’t like these abbrevs. Mostly because I just can’t seem to remember them and google them over and over again. And since the whole point of using them is to save some time…


No, you are absolutely right to find out what is possible and where the limits of a technique are. And I must admit that although the AnimationAppExample looks OK to me, I wouldn’t swear that it never drops a frame or jumps in any other way…

Good luck with your experiments, I don’t think I have any other ideas of what to check beyond what I already said…



It’s not about dropping frames. The thing is that each frame takes a different amount of time, which results in the animation not looking fluent even though the framerate sticks to 60 FPS all the time. In real life, moving objects tend to have a smooth derivative to their movement, which is probably what makes it so noticeable.

Thanks. I am going to need that :slight_smile:

I will probably start with what widdershins suggested, that is, measuring the exact time that elapsed between the two timer ticks and using that as a step size…


Oh, crap…

I just don’t know what to do with that… Even when using a dynamic step, this doesn’t help the feeling of stuttering. It ensures that the animation maintains its overall speed, but it doesn’t solve the problem that each frame is displayed for a different time interval. On the large LED screen with its low resolution, this is just clearly visible. I probably somehow need to synchronize the frame outputs to the exact time. Well, isn’t this the type of problem you guys have to cope with while doing all the audio stuff?


I am just throwing ideas: as you say it is especially visible on low resolutions, maybe you see rasterization artefacts?


No, I don’t think so… On the large screen nothing is absolutely fluent because the distances between the LEDs are just too large. But that doesn’t matter… That’s not what I mean. The thing is that at some points (about once or twice a second), the animation visibly lags on both the computer screen and the LED screen. As if a frame or two are dropped… But they are not actually dropped, they only follow closer to each other in the next steps (probably as the timer messages catch up).

What I think is happening is that sometimes the message based timer gets significantly delayed - maybe because some other messages have higher priority.

I have also tried increasing the process priority (even to “realtime”, as Windows calls it), but that does not help either. The application is probably only racing against itself.

I have two ideas what I could do now…

  1. log the milliseconds in each step and then make a graph from it (in Excel or whatever), so that I can verify that the timer is really delayed
  2. Use the high precision timer, but without rendering to the screen (as that would wait for the message based timer anyway) and only render to the LED array…

I will try that and post the results. Someone may find it useful someday :slight_smile:

//edit: Forgot to mention… I have also tried 4 computers in total - each of them with a different HW configuration. It’s exactly the same on all of them.


OK, I added some logging within my step function and I can definitely tell the issue is with the message timer interval. It’s just too inaccurate.

If I log the differences between the times, I get a pattern similar to this (in milliseconds):

25 25 1 25 25 1 25 25 1 …

//that 25 is not really 25, but rather something between 24 and 26 ms, and the zero is not really zero but something between 0 and 1 ms.

Which is of course what makes the stuttering so visible on the large screen with low resolution.

If I replace it with a high precision timer, I get nice frame intervals between 15 and 17 ms, which I believe would be good enough.

BUT, of course there’s a catch. I cannot do any rendering there because the high precision timer runs in its own thread. To render there I would have to acquire a MessageManagerLock, but that makes no sense as I would have to wait for the inaccurate timer again…

Is there some simple trick or do I really need to use another thread and then make some smart synchronization?

I render everything in a separate image anyway and send it over ArtNet, so this part on its own has no problem, I guess. But I would still like to render it to the screen, for which I need some synchronization mechanism…


Well no, it’s incorrect to say that the Timer class is inaccurate.

If nothing at all is happening on the message thread, i.e. nothing is repainting, no other timers or callbacks are doing any work, then a Timer will be pretty accurate, probably down to about 1ms.

But anything that runs on the message thread, like Timer does, is at the mercy of being delayed by other events happening on there. Probably repainting will be the thing that causes the biggest delays. Obviously if you have a paint callback blocking for e.g. 25ms then the best a Timer can do is just to be called in-between when it gets a chance to run.

Likewise there’s no point in having another thread trigger something else to happen on the message thread because that too won’t be able to interrupt a long repaint event, and would also just have to wait until it finishes.

So all you can really do is to optimise or restructure your painting to be quicker. That’s going to be true of any framework on any OS, as message threads are the same everywhere!

(An idea I’ve had for a long time is to try a scheme where the Graphics class doesn’t do any drawing, but just records all the instructions into a list which is then actually rendered on another thread, freeing up the message thread to do other work, but don’t know when I’ll get the chance to try that one out!)


(An idea I’ve had for a long time is to try a scheme where the Graphics class doesn’t do any drawing, but just records all the instructions into a list which is then actually rendered on another thread, freeing up the message thread to do other work, but don’t know when I’ll get the chance to try that one out!)

It doesn’t have to be so complicated; basically you can do this by drawing from a background thread onto an image. The whole list logic, I would say, isn’t necessary (of course it would be easier for existing code which relies on the message thread)

But it also has one real disadvantage: the paint routine will still block the message thread!
(especially if you have a lot of small things)

An option would be, if we had some kind of setBackgroundAllowed (true) flag to tell JUCE that it’s okay to call this function from a background thread, then JUCE could perform any kind of multi-threaded optimization to allow this component to be repainted more quickly.


No, that’s really not the same thing - the ideal situation is for the paint routine to happen on the message thread (because you always need it to interact safely with components and other data that would be a total pain to make thread-safe), but for it to run optimally quickly, deferring the actual rendering work to another thread, and also skipping an intermediate image if possible, as images can also slow things down.
This would work really well with openGL in particular, where the rendering is already on a separate thread.


If you paint a lot of small particles, the paint routine itself can be very slow. The current OpenGL render implementation is also very inefficient when drawing lots of vertical lines (and not stable enough to run as a plugin on hundreds of customer PCs, driver issues etc…)

The image could also exist as a buffer in graphics memory, and could be painted by newer graphics APIs (Metal…)

I’m not saying that making some kind of graphics protocol wouldn’t improve things, but it doesn’t solve many of the problems that cause real performance trouble.

The real problem is the interaction with the message-thread, which still happens if you create the protocol on it.

So when there is a lot of stuff going on on the message thread, the graphics will still stutter, because they rely on being created on the message thread.

I think the future is more some kind of pipe structure

data model --one-way--> worker thread creates graphics --one-way--> display


Thanks Jules for your comments,

I am not sure, however, that what you describe is what happens here. This would imply a very high, or at least significant, CPU load from the application, wouldn’t it? But I have something like 3%…

Or, there are some priorities involved. When does the paint method get called? Is it another message-based timer on the main thread? When those two get really close, I can imagine the rendering delaying my step timer (the case when the paint timer comes juuuuuuust before my timer). Does it work like this? Well, the pattern 25 25 1 25 25 1 would actually support that kind of theory… Two times it comes first and then one time second… What’s the frequency of the repaint timer (if it really works like this)?

As for the separate thread… I believe it may actually help. I have already tried that. I added the high precision timer, and in that thread I update the Box2D world, render to an image and send the content over Art-Net. At one place, I create a copy of the rendered image (while locked in a critical section) and assign it to the main thread. The main thread only renders its assigned image.

I get a consistent 16 ms between each frame. Though at this point it stutters every once in a while (but not all the time, as is the case with the message timer), which may happen when the synchronization gets stuck waiting for the critical section - I will still have to dig into that.

But you are right that if I wanted to get a fluent animation on the screen this wouldn’t help as I would still need to wait for the rendered image in the message thread. In my case, it is important to get fluent Art-Net output. That’s why I decided to try out the thread based timer…


Well that’s not what I was talking about, I’m talking about “normal” 2D vector/UI/etc graphics. To do particle stuff you need to do your own GL shaders and data pipeline.

No, the only thing it implies is that some event (or cluster of events) - maybe paint, maybe not - is hogging the message thread for about 25 ms at some point, so that your timer is being held off.

The CPU level is almost useless here, because it’s an average, and you care about the granularity here, not the average load.

TBH you may get very slightly better performance than a Timer if you trigger your own callback event from a thread or high-precision timer, but only if there are many other timers running. That’s only because the Timer class uses a single callback event for all its timers, so if the event queue is congested then adding your own event will give you a slightly better probability of getting a chance to run. But if the queue isn’t busy then a Timer with a 1 ms period is pretty much equivalent to just posting a message to the event queue to be run as soon as possible.


Yep, that’s what I was talking about in the paragraph right after that…

Could you please answer the part about when the paint method gets called? Is it also timer based, or does the repaint() method register some callback? Looking at the code, it seems to me that it invalidates the region, which I believe should trigger the WM_PAINT message. Right? And if this message happens to come before the WM_TIMER message and takes long enough to finish, the result would be exactly what I am seeing, right? It’s a lot of guessing on my side because I do not really know WinAPI, nor how JUCE handles these kinds of things internally…


Hi Aros, sorry for jumping into your discussion at this point. I just joined the community and am working on something somewhat related to your goal…

I think that the timer approach is just not the right one. Correct me if I’m wrong: you have a sort of engine which calculates new frames and needs to be called pretty regularly, and then you need to show these frames to the user on the computer display and at same time on a sort of led wall… well, here is what I would do…

Let your “engine” work with its own time base, say 80 frames per second. And each time you get a new frame you mark your on screen component to be refreshed.

This will “disconnect” the engine frame beat from that of the on screen component which will be painted in the traditional way by accumulating refresh requests (well, technically speaking, overlapping requests get aggregated into one single request working on a potentially bigger area).

When the on screen component paint() function will get called, it would just take a snapshot of the current offscreen image (the one managed by the engine at 80 FPS) and copy it over its own Graphics.

This is a simple description, but of course you’d better add double buffering techniques to reduce the time of the copy (when Component::paint() gets called) to a swap of a pair of pointers.

This will give the engine the “illusion” to be called very precisely because you will implement its own offline loop in a separate thread which won’t follow the system message priority rules… and will keep the paint() function from the component side very light. And you won’t take care of the position of your bouncing ball at this “rendering” time, because it already was taken care by the engine in its own thread loop.

Another improvement you could add is not to have a 1:1 correspondence from the offline image and the one of the Graphics object you will copy the image to.

If you manage a 2xWidth by 2xHeight offline image and scale it down to the final image when needed, then you’ll have twice the pixels in which to draw your bouncing ball (like a sort of “in house” retina display). The final result is that your intermediate virtual display will support half-pixels and your ball will move way more smoothly.

And of course you could experiment with 4x factors as well. It just raises the CPU load a bit, but if I’m not wrong, scaling should happen with hardware acceleration (not sure about this on Windows… I’m on a Mac).

Just my contribution…