60fps / framerate synced animation repaints

I notice that if I use a juce::Timer calling repaint() on my component, I can't get fully smooth animation. Curiously, when I resize my component's window on macOS, I can clearly see that the animation is buttery smooth while I am resizing, so it seems that repaints triggered by the platform can fire at a faster rate than those from a Timer.

Is there a better way to trigger repaints for animation? On the web there is requestAnimationFrame(), but I can't find an equivalent in JUCE.

Thanks

I’d need more information/an example to try to help with this, but be aware that the regular juce::Timer generally isn’t particularly accurate or consistent. So if you set your interval to 1000/60 ≈ 16 milliseconds, you might get a few frames rendered every 16 ms and then a few rendered 20 ms apart, and these delays can easily stack up over the course of execution if your paint callback is too slow.
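You can see this for yourself by logging the real interval between callbacks. A minimal sketch (JitterLogger is a made-up name, and I'm assuming a Projucer-style project for the header; untested):

#include <JuceHeader.h>

// Logs the actual interval between 60 Hz timer callbacks: expect ~16.7 ms,
// but values of 15..20 ms are common on a busy message thread.
struct JitterLogger : private juce::Timer
{
    JitterLogger()  { startTimerHz (60); }

    void timerCallback() override
    {
        auto now = juce::Time::getMillisecondCounterHiRes();

        if (lastTime > 0.0)
            DBG ("delta: " << (now - lastTime) << " ms");

        lastTime = now;
    }

    double lastTime = 0.0;
};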

It’s entirely unclear from this post what your issue is here, and there’s no real way to debug it without knowing more.

I would like an answer to this too, heh. With a Timer, it doesn’t seem possible to synchronize to the screen rate. The message thread comes and goes so you may miss a frame here and there, depending on current load. There’s some discussion about it here.

There are many takes on that problem. My two pennies:

  • the paint call should have no side effects; it only throws the current state on the screen
  • since the paint call has no side effects, it doesn’t matter if you render 60 or 40 frames, as long as you render more frames than the perceivable frequency (24 fps is the empirical limit, 30 is a common value as a safety margin); the sketch after this list shows the idea
  • In my video engine I synchronise to the audio as clock master, which works well
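Roughly what the first two points mean in code, as a minimal sketch (class name and speed are made up, and I'm assuming a Projucer-style project for the header; untested):

#include <JuceHeader.h>
#include <cmath>

// The timer advances the state using the real elapsed time;
// paint() has no side effects and only draws whatever state is current.
class MovingDot : public juce::Component, private juce::Timer
{
public:
    MovingDot()  { startTimerHz (60); }

    void paint (juce::Graphics& g) override
    {
        g.fillAll (juce::Colours::black);
        g.setColour (juce::Colours::white);
        g.fillEllipse (x, (float) getHeight() * 0.5f - 5.0f, 10.0f, 10.0f);
    }

private:
    void timerCallback() override
    {
        auto now = juce::Time::getMillisecondCounterHiRes();
        auto deltaSeconds = (now - lastTime) * 0.001;
        lastTime = now;

        // Advancing by elapsed time means a dropped frame doesn't change the speed.
        if (getWidth() > 0)
            x = std::fmod (x + (float) (deltaSeconds * 100.0), (float) getWidth());

        repaint();
    }

    double lastTime = juce::Time::getMillisecondCounterHiRes();
    float x = 0.0f;
};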

There is some frame buffering going on at OS level in the abstraction layer, which makes it impossible to know the exact screen refresh times.

The Timer indeed only schedules the next paint call, it doesn’t execute it. How quickly the next repaint can actually happen is limited by the number of events currently queued, and therefore by how performant your paint and other message-thread activity is.

When you say smooth animation, are we talking about subtle stutter and micro glitches?
Something like: an otherwise static image moves at a constant rate using the “elapsed” frame delta, but somehow
every Nth frame it gets duplicated or lags and slightly stutters?

From my experience so far, you currently get the best results with an OpenGL graphics context and a timer rate higher than 60 fps.
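Something like this (sketch, untested; assumes the juce_opengl module and a Projucer-style project; AnimatedComponent is a made-up name):

#include <JuceHeader.h>

// Attach an OpenGL context to a component and drive repaints at 120 Hz,
// i.e. faster than a 60 Hz display, as described above.
class AnimatedComponent : public juce::Component, private juce::Timer
{
public:
    AnimatedComponent()
    {
        openGLContext.attachTo (*this);  // paint() is now rendered via OpenGL
        startTimerHz (120);              // schedule repaints above the refresh rate
    }

    ~AnimatedComponent() override { openGLContext.detach(); }

    void paint (juce::Graphics& g) override
    {
        g.fillAll (juce::Colours::black);
        // ... draw the animated content here ...
    }

private:
    void timerCallback() override { repaint(); }

    juce::OpenGLContext openGLContext;
};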

To test this you can duplicate the JUCE GraphicsDemo. Select Images: ARGB Tiled.
Deactivate “Animate Rotate” and watch the movement.

I have my monitor running at 60Hz with VSync on. The timer in this demo runs at 60Hz.
Still stuttering.

Now attach an OpenGL context.
Still stuttering.

Set the timer to 120Hz. NOW the movements appear much smoother and the stuttering is mitigated so it’s almost not noticeable anymore.

The thing is: It’s still at 60 fps, because VSync is turned on!

To my understanding, the stutter is caused by duplicated frames. Why duplicated, if there is enough time and we calculate the correct frame delta? Because there is a mismatch between when we submit the frame and when it’s actually presented. And the accuracy of the timers is not guaranteed to be precisely 1000 / 60 ms. For example, if the timer fires every 17 ms while the display refreshes every 16.7 ms, the submission slips by about 0.3 ms per frame, and roughly every 56th refresh has no new frame and shows a duplicate.

So why does a higher timer rate mitigate it? Apparently it’s complicated, see this article:


Your explanation is correct in concept, although the term you’re looking for is overdraw.

If it helps understand how this works in JUCE terms, the rendering system uses dirty regions to redraw Components. How that gets done is pretty complicated, but you can check this by using repaint debugging in your app (set JUCE_ENABLE_REPAINT_DEBUGGING=1 in your project).
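e.g. in your AppConfig.h, or as a preprocessor definition in the Projucer/CMake project:

// Every repainted region flashes in a random colour, so you can see
// exactly which areas get redrawn, and when.
#define JUCE_ENABLE_REPAINT_DEBUGGING 1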

Are you sure about this? The thing is:

The software rasterizer uses platform-dependent functions for drawing: on Windows, the GDI functions. You know, all that paint event, invalidate and windowing stuff. Here dirty regions are used and only invalidated areas are drawn. It makes sense that there could be a problem with overdraw.

But hold on.

The OpenGL context avoids GDI regions. It uses a “non-repainting” window and just “draws” the framebuffer as a quad. Here the “dirty regions” are applied to the framebuffer, and then the components are rendered into the invalidated framebuffer. It’s this whole part in OpenGLContext.cpp:

void paintComponent()
    {
        // you mustn't set your own cached image object when attaching a GL context!
        jassert (get (component) == this);

        if (! ensureFrameBufferSize())
            return;

        RectangleList<int> invalid (viewportArea);
        invalid.subtract (validArea);
        validArea = viewportArea;

        if (! invalid.isEmpty())
        {
            clearRegionInFrameBuffer (invalid);

            {
                std::unique_ptr<LowLevelGraphicsContext> g (createOpenGLGraphicsContext (context, cachedImageFrameBuffer));
                g->clipToRectangleList (invalid);
                g->addTransform (transform);

                paintOwner (*g);
                JUCE_CHECK_OPENGL_ERROR
            }

            if (! context.isActive())
                context.makeActive();
        }

        JUCE_CHECK_OPENGL_ERROR
    }

Now, while implementing a Vulkan context I stripped all of this code.

Left over is:

  • A JUCE timer running at 60Hz.
  • A Vulkan context that uses “Mailbox” as present mode (it’s VSynced to 60Hz).
  • 1 Frame in flight, synced with Vulkan fences.

So there are no dirty regions involved at all! Just JUCE timers and pure rendering to a Windows GDI surface (window?). It still has the same micro-stutter problems!

Interestingly, increasing the timer rate to 120Hz also mitigates the stutter (like in OpenGL).
But ONLY if I use “one frame in flight” instead of multiple frames being pre-rendered (so it’s the same behavior as in OpenGL with wgl SwapBuffers). This makes sense, since the frame delta time would naturally be incorrect if there is no synchronization.

To me it seems like a combination of inaccurate timers (sometimes 15 ms, 16 ms, 17 ms) and the problem mentioned in the article above: a mismatch between the CPU submitting the “present” and the GPU actually showing it at the right time.

So to me it’s not that obvious what is causing the stutter.

The inaccurate timers? The GPU<->CPU mismatch? The windowing system? The frame delta calculation? All of it together?

Has anyone achieved a stutter free display of moving objects with a 60Hz juce timer?
What are the alternatives?

IMHO the actual drawing code is still in paint(), and to execute that JUCE needs to lock the message thread, which can take a while → stutter.

For smooth animation you need to put your own OpenGL instructions into the renderOpenGL() callback.
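i.e. roughly this pattern (sketch, untested; GLAnimation is a made-up name, assumes the juce_opengl module and a Projucer-style project):

#include <JuceHeader.h>

// Render on the GL thread instead of the message thread: implement
// juce::OpenGLRenderer and let the context repaint continuously,
// paced by the swap interval rather than by a message-thread timer.
class GLAnimation : public juce::Component, private juce::OpenGLRenderer
{
public:
    GLAnimation()
    {
        context.setRenderer (this);
        context.setContinuousRepainting (true);  // renderOpenGL() is called per frame
        context.attachTo (*this);
    }

    ~GLAnimation() override { context.detach(); }

    void newOpenGLContextCreated() override {}   // create shaders/buffers here
    void openGLContextClosing() override {}      // release GL resources here

    void renderOpenGL() override
    {
        // Runs on the GL thread, not the message thread, so a busy
        // message queue can no longer delay the frame.
        juce::OpenGLHelpers::clear (juce::Colours::black);
        // ... issue your own GL draw calls here ...
    }

private:
    juce::OpenGLContext context;
};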

Well, this is problematic. No component painting? So why even use JUCE? Additionally:

I don’t know… seems like a wrong conclusion.

It’s not that it stops stuttering because the MessageManager is no longer locked.

It stops stuttering because renderOpenGL() is called at a much higher rate (the rate of the OpenGLContext thread-pool job) and Thread::wait(), as the JUCE timers use it, is not called. So it doesn’t depend on the inaccurate 60 Hz timer.

It has the same effect as setting the JUCE timer to 120Hz. It just reduces the frame delta so it appears smooth. Or it reduces the timespan until the next frame starts rendering after SwapBuffers has executed.

As a side note: calling repaint() or getCachedComponentImage()->invalidateAll() at the end of paint() has the same effect: it keeps the painting job constantly alive. Not that I advise it :sweat_smile: