(SOLVED) Repaint() ignores opacity and repaints parent Component

We’re currently experimenting with some different approaches to rendering. If all goes well, the solution could be simple: just use JUCE’s new Metal renderer. It would be supported on MacBooks dating back to 2012 or so, and the iPhone 5s and newer. However, we still need to have a think about how to handle layers and how to avoid redrawing everything all the time.

Before that lands (no promises on any timeline) I’ll have a look at whether there’s anything we can do to improve the current renderers. Using an image-based context is one approach, but you might end up sacrificing a lot of the speed benefits of using the CoreGraphics API in a more abstract way.
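For what it’s worth, the core idea behind an image-based cache can be sketched in a few lines of plain C++ (no JUCE or CoreGraphics here; `CachedPainter` and its members are purely illustrative names, not anything from the JUCE codebase): keep the last rendered output around and only re-run the expensive paint routine when the component has been invalidated.

```cpp
#include <string>

// Minimal standalone sketch of component-level repaint caching.
// All names are hypothetical; a real implementation would cache a
// bitmap and blit it, rather than returning a string.
struct CachedPainter {
    int paintCalls = 0;   // how often the expensive paint actually ran
    bool dirty = true;    // starts dirty: nothing has been cached yet
    std::string cache;    // stands in for the cached bitmap

    std::string render() {
        if (dirty) {                        // only repaint when invalidated
            ++paintCalls;
            cache = "expensive pixels";     // pretend this is the costly part
            dirty = false;
        }
        return cache;                       // otherwise just reuse the cache
    }

    void invalidate() { dirty = true; }     // e.g. on setNeedsDisplayInRect
};
```

The point of the trade-off mentioned above is visible even in this toy: every `render()` after the first is nearly free, but each one also pays the (here invisible) cost of copying the cached output to the screen.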


Just a quick update: I’ve got a proper caching implementation for iOS now. It works nicely. A PR will follow as soon as I find the time.


@t0m, @basteln
PR: https://github.com/WeAreROLI/JUCE/pull/572

As this is indeed somewhat risky and may even slow down drawing for some, I made it a Projucer option of the juce_gui_basics module called JUCE_ENABLE_IOS_REPAINT_CACHE.

Implementation details:

  • At first I implemented this by employing a CGLayer. The docs make it sound like exactly what one would want: “An offscreen context for reusing content drawn with Core Graphics”. But it turned out to be really slow. Your guess is as good as mine as to why that might be.
  • The implementation is entirely in the JuceUIViewController. This allowed me to override setNeedsDisplayInRect and friends, so the UIViewComponentPeer can remain ignorant of the dirty rects and the code will be easier to maintain.
  • I suppose one could adapt this “technique” for macOS as well.
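To make the dirty-rect bookkeeping concrete, here is a hypothetical plain-C++ sketch of what an override like the one described above might accumulate (`Rect` and `RepaintCache` are made-up names; real code would use UIKit’s CGRect and live in the view controller, and the actual PR may track rects differently): invalidated regions are merged into one union rectangle, and at draw time only that region needs to be repainted into the cache while everything else is composited from it.

```cpp
#include <algorithm>

// Illustrative stand-in for CGRect.
struct Rect { int x, y, w, h; };

// Hypothetical dirty-region bookkeeping for a repaint cache.
struct RepaintCache {
    Rect dirty { 0, 0, 0, 0 };
    bool hasDirty = false;

    // Accumulate invalidated regions (mirrors setNeedsDisplayInRect calls).
    void markDirty(const Rect& r) {
        if (!hasDirty) { dirty = r; hasDirty = true; return; }
        int x1 = std::min(dirty.x, r.x);
        int y1 = std::min(dirty.y, r.y);
        int x2 = std::max(dirty.x + dirty.w, r.x + r.w);
        int y2 = std::max(dirty.y + dirty.h, r.y + r.h);
        dirty = { x1, y1, x2 - x1, y2 - y1 };   // union of both rects
    }

    // At draw time: hand back the region to repaint, then reset it.
    Rect takeDirty() {
        Rect r = hasDirty ? dirty : Rect { 0, 0, 0, 0 };
        hasDirty = false;
        return r;
    }
};
```

Merging everything into a single union rect is the simplest policy; it can over-invalidate when two small dirty areas are far apart, which is one of the tuning decisions such a cache has to make.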

@pixelpacker That looks interesting, thanks for letting me know! I haven’t had a chance to try it yet…have you been using this for a couple of days without issues? @t0m I would be interested in your opinion on this as well :slight_smile:

So far, it works great - no issues. Would also be interested in @t0m’s opinion on this.

Tom’s on holiday until next week so it might be a little while until he can look at this.

Thanks for the heads-up.

@pixelpacker Just a heads-up: I tried this in our project, turning one knob very quickly. Without the changes from your PR, the CPU is at 45-55%. With the changes, it goes to 100% and the animation stutters. You pointed out that drawing might become slower for some, and for my project it seems to be the case. But still, I very much appreciate your efforts :+1:

Yes, as you pointed out, the success of this method of course depends heavily on the workload. It will always be slower to first draw to the cache and then display the cached output than to draw directly, so it is only faster overall if the painting of the other components that the cache prevents is more expensive than the added overhead caused by the caching. In my scenario it was well worth the overhead; perhaps in yours it isn’t (or something else is wrong). Hard to tell from a distance.
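The trade-off above can be stated as a one-line cost model (names and numbers are purely illustrative, not measurements from the PR): caching wins exactly when the paint work it avoids is larger than the overhead it adds.

```cpp
// Toy cost model for the repaint-cache trade-off.
// Direct drawing pays the full paint cost every frame; the cache pays
// only for the dirty region plus a fixed compositing overhead.
bool cachingIsFaster(double fullPaintCost,    // repainting everything directly
                     double dirtyPaintCost,   // repainting only the dirty parts
                     double blitOverhead)     // compositing the cached image
{
    return dirtyPaintCost + blitOverhead < fullPaintCost;
}
```

With a fast-moving knob invalidating large areas every frame, `dirtyPaintCost` approaches `fullPaintCost` and the blit overhead becomes pure loss, which would match the slowdown described above.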

That aside, CPU load being 50% or 100% is not a very good indicator of how well something performs. Just as an example: variant A could draw 1 frame per second at 50% CPU load while variant B draws 1000 frames per second at 100% CPU load. (Not saying that this is the case here, as you also mentioned that it stutters.)

Have you verified that paint() of the other components is NOT called when the UIImage cache is active (perhaps by adding a print)? What are the heaviest items when profiling an optimised build? Perhaps you can post (or send) an inverted call tree of the heaviest thread.