Fluent animation frame timing

Yes, as you can see, I have already done that… It still needs a bit of fine-tuning, though, as also mentioned in that post. Sometimes it stutters a bit (most likely it gets locked somewhere in between).

However, you won’t get a fluent animation on screen this way, as already explained by jules and also widdershins. The snapshot of the current offscreen image, as you call it, will be rendered from the message thread and thus this will happen in irregular intervals, which is still going to make the animation a bit choppy.

Also, I believe that the offscreen rendering is somewhat slower than the direct onscreen rendering. BTW is that so, @jules? I guess it also depends on the renderer…

Another thing is that when the target size is not the same as that of the original image, resampling will take place which is terribly slow with the SW renderer.

I don’t get this part though… But if you mean rendering at a higher resolution and then downscaling to the target size (i.e. supersampling), that will trigger the resampling process, which again is terribly slow. And I am not quite sure JUCE supports rendering to an offscreen target with the GL renderer… Setting the resampling algorithm to “nearest neighbor” helps a lot, but it’s still slow. I would of course appreciate any insight from someone who knows how JUCE handles offscreen rendering. Ehm, jules?

There isn’t an answer except for “it depends on many things”.

For most OSes and rendering engines, it’ll use the native OS repaint region/callback system, and they’ll all handle the injection of paint events into the message queue in different ways. On others it does use a Timer or something similar internally, but you can’t assume anything other than “at some point fairly soon after calling repaint(), you’ll get a paint callback”.

Looking at how our code works so you can find ways to tweak your code to improve performance is probably a bad idea, because if it’s so fragile that you need to do this, then whenever something (the OS or JUCE) makes a tiny change to the way things behave (and that will happen) then it’ll break your code. Better to do something like alph suggests, probably using openGL.

But which of the suggestions? Offscreen rendering using OpenGL? Or any of that supersampling stuff? Currently I do offscreen rendering, but using the SW renderer. I also do almost everything as he describes already, apart from double buffering, which I may consider adding, but I doubt it will help significantly. Copying a 120x60 image shouldn’t take anything like 25 ms…

And by the way, JUCE supports float coordinates, which I do use. That should have a fairly similar effect…

And thanks to all of you… Helps me a lot.

Yes, I do intense graphics work on Mac and Windows from time to time, and I’ve discovered that on Windows I can only count on the software renderer, which really makes things harder when it comes to high-performance 2D transforms.

On Mac I’m pretty sure that drawing maps directly onto CoreGraphics, which makes things happen in a hardware layer, so it really does the job faster. This is the reason I do your kind of stuff on Mac exclusively, where possible.

Since, as said, I handle pixels intensively in an offscreen image, I really improved performance by calling setBufferedToImage(true) on the target component visible on the screen. I suppose this is because every time you ask for a Graphics for a visible component, it takes longer than for an offscreen one. So the buffered-image implementation takes care by itself of accessing the visible Graphics only once, after all those micro-drawings have happened off screen.

This, at least, works as described on a Mac…

Hm, I’m on Windows here…

However, this is getting strange…

To summarize… What I already have:

HighResolutionTimer thread (60 Hz):

  1. Box2D update

  2. Render to offscreen target using SW renderer

  3. Send the image over Art-Net.

  4. Create a copy of the rendered image with the createCopy() member function

  5. Inform the component that it should repaint and set its image to this copy via a C++11 lambda and MessageManager::callAsync, which looks like this:

    img = img.createCopy();
    MessageManager::callAsync ([=]
    {
        this->imgComponent.setImageWithBroadcast (img);
        this->imgComponent.repaint();
    });

MainThread:
imgComponent, which is in fact derived from ImageComponent, gets repainted because its image was set to a new one.

The point is that if I log the time in milliseconds (getMillisecondCounter()) at the end of the timer thread, it’s almost always exactly 16ms which is OK.

If I log the time at the end of the image paint function, I get something between 15 and 24 ms, which I believe is just fine. I can also tell no frames are missing, because I also log the index of the image I render, and no index is missing at all.
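For illustration, this kind of interval logging can be sketched in portable C++, with std::chrono standing in for getMillisecondCounter(); the class name and the tolerance parameter are my own, not anything from JUCE:

```cpp
#include <cassert>
#include <chrono>
#include <cstdio>
#include <vector>

// Records the time between successive frames and reports any interval that
// deviates from the expected period by more than a tolerance.
class FrameIntervalLogger
{
public:
    FrameIntervalLogger (double expectedMs, double toleranceMs)
        : expected (expectedMs), tolerance (toleranceMs) {}

    // Call once per frame; returns true if this interval was within tolerance.
    // The very first call has no previous frame to compare against.
    bool tick()
    {
        const auto now = std::chrono::steady_clock::now();
        bool ok = true;

        if (hasLast)
        {
            const double ms =
                std::chrono::duration<double, std::milli> (now - last).count();
            intervals.push_back (ms);

            ok = (ms >= expected - tolerance && ms <= expected + tolerance);

            if (! ok)
                std::printf ("frame interval %.2f ms (expected %.2f +/- %.2f)\n",
                             ms, expected, tolerance);
        }

        last = now;
        hasLast = true;
        return ok;
    }

    const std::vector<double>& getIntervals() const { return intervals; }

private:
    double expected, tolerance;
    std::chrono::steady_clock::time_point last {};
    bool hasLast = false;
    std::vector<double> intervals;
};
```

Calling tick() at the end of each timer callback (with expected = 16.7 for 60 Hz) makes the occasional long interval stand out immediately in the log, instead of being buried among hundreds of normal ones.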

So far so good, right? That’s what I wanted to achieve…

Well, yes. But that’s not what I am seeing. It still lags, like twice a second… I can clearly see that on the screen (I don’t know about the Art-Net output at this point, I will try that later). So, if the end of the paint function happens exactly where it should and no images are left out… then the lag has to happen somewhere after that. Is it possible that the frame-buffer switch (or however JUCE rendering on Windows works) is slightly delayed every once in a while? What could be the reason for that? Now I know that the artifacts I was seeing all that time are not even related to how I render it.

I think I can also see the same effect with AnimationAppExample from the JUCE examples directory. It’s just not so obvious since the movement is not that simple and the resolution is not that low. But even then I can see it stutter every once in a while.

I know that JUCE is not a game engine and that this is not a standard use-case, but I would still like to know what happens and why… I guess I wouldn’t encounter this on Mac right? Too bad I don’t have one to try that…

Also note that in my window I have another component that renders the game as well, but this one uses Box2DRenderer, which is somewhere in JUCE, and is called normally from the main thread. The stuttering happens on this component as well, and I believe it happens at the same time as on the ImageComponent (it’s impossible to tell, because, you know… there are 60 frames per second). It really looks like the lag is caused by something else after everything is painted. To me, it looks like the frame-buffer switch is delayed for some reason…

A trick we’ve used in Tracktion to catch occasionally-glitching bits of code is to create a class that uses RAII to measure the time a function or block takes to complete, and which will assert/log if it takes more than a maximum duration. It’s easy to create one of those, just a few lines of code, and if you scatter them around your paint routines you can quickly track down anomalies.


Sounds like a great tool to share with the community!

I believe it is already, as ScopedTimeMeasurement


Well, yes, though ScopedTimeMeasurement just logs each measurement. What you need for this kind of thing is a version that asserts when something takes too long, so you can catch it and see what’s going on.
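A minimal sketch of that idea in standard C++ (the class name, the violation counter, and the printf reporting are illustrative, not JUCE API; std::chrono stands in for JUCE's timing calls, and a debug build could assert instead of counting):

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <cstdio>

// RAII scope timer that only reports when a block overruns its budget,
// so occasional glitches stand out instead of flooding the log.
class ScopedGlitchDetector
{
public:
    ScopedGlitchDetector (const char* label, double maxMilliseconds)
        : name (label), limitMs (maxMilliseconds),
          start (std::chrono::steady_clock::now()) {}

    ~ScopedGlitchDetector()
    {
        const double elapsedMs =
            std::chrono::duration<double, std::milli>
                (std::chrono::steady_clock::now() - start).count();

        if (elapsedMs > limitMs)
        {
            ++violations;
            std::printf ("%s took %.2f ms (limit %.2f ms)\n",
                         name, elapsedMs, limitMs);
            // in a debug build you might assert here instead of just logging
        }
    }

    static std::atomic<int> violations; // count of blocks that overran

private:
    const char* name;
    double limitMs;
    std::chrono::steady_clock::time_point start;
};

std::atomic<int> ScopedGlitchDetector::violations { 0 };
```

Dropping one of these at the top of each paint routine, with a budget a little above the expected frame time, narrows down which block (if any) is responsible for a stutter.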


Hm, thanks. I get the idea. But I doubt this is going to help in my case. As I said, I have already measured and logged the time in all the paint functions I had written, and the differences are all right after implementing that trick with HighResolutionTimer. That means the lag has to happen somewhere outside of my code.

Well, if I was trying to track it down, then adding checks inside the timer class itself could find out whether any other timer was taking too long. And hacking some checks in the internals of e.g. MouseInputSource could tell you whether it’s a mouse event, etc. If the message thread is getting blocked, there must be a place where you can detect what’s doing it.