I have an app that copies an image into a larger image, tiling it, and I wonder whether there is a way to optimize that with the cache in mind. The number of vertical/horizontal tiles can vary, so the final output may be arbitrarily large. My initial thought is to copy the first row of source pixels into the first row of destination pixels, repeat that for the number of horizontal tiles, then do the second row, and so on, starting over once I reach the next vertical tile. But I also wonder whether, instead of repeating the read from the source, I should switch to reading from the destination after the first copy from the source (i.e. so the read and write addresses are more closely related)? Or? Any advice would be appreciated.
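For what it's worth, the row-major scheme described above can be sketched in plain C++ like this. `tileImage` is a hypothetical helper, not a JUCE API; it fills the first horizontal copy from the source row, then replicates the remaining copies from the destination row itself, so the subsequent reads hit data that was just written and is still cache-hot:

```cpp
#include <cstdint>
#include <cstring>

// Tile a srcW x srcH pixel buffer across a (srcW * tilesX) x (srcH * tilesY)
// destination. Both buffers are walked sequentially, one row at a time.
void tileImage (const uint32_t* src, int srcW, int srcH,
                uint32_t* dst, int tilesX, int tilesY)
{
    const int dstW = srcW * tilesX;

    for (int ty = 0; ty < tilesY; ++ty)
        for (int y = 0; y < srcH; ++y)
        {
            uint32_t*       dstRow = dst + (size_t) (ty * srcH + y) * dstW;
            const uint32_t* srcRow = src + (size_t) y * srcW;

            // First copy comes from the source row...
            std::memcpy (dstRow, srcRow, (size_t) srcW * sizeof (uint32_t));

            // ...the remaining copies read back from the destination row,
            // which is the variant the question asks about.
            for (int tx = 1; tx < tilesX; ++tx)
                std::memcpy (dstRow + tx * srcW, dstRow,
                             (size_t) srcW * sizeof (uint32_t));
        }
}
```

Whether the read-back-from-destination variant actually wins over re-reading the source depends on the tile size relative to the cache; for small tiles the source row stays resident anyway, so profile both before committing.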
As long as you are not using the software renderer, I think you will have a hard time beating the OpenGL/CoreGraphics renderer in terms of performance. For example, the OpenGL renderer stores all JUCE images as OpenGL textures, which means drawing the image is a single OpenGL command. I’m sure it’s similar with CoreGraphics.
Fabian, thanks for answering my question, but I’m not talking about rendering to the screen — I’m rendering to an Image object so that I can save the results. The process takes a JPG/PNG image of X by Y and tiles it across an arbitrary set of Xa by Yb dimensions. Since this is a bunch of memory copying, I figured I might be able to optimize the image generation by taking cache behaviour into account.
I understand your problem, but it doesn’t make a difference: even when drawing to a JUCE image, JUCE will use native CoreGraphics/OpenGL textures for this (see for example here and here), which will be blazing fast — much faster than copying memory yourself.
So, simply create a new empty Image, create a graphics context from your newly created Image with Graphics::Graphics (const Image&), and then use that context to draw your tiles into the destination image.
Just noticed something: if you are creating a new Image with the regular Image constructors (and not via OpenGLImageType), then the created Image will still use software rendering.
Edit: I thought there was an easy fix for this and promised a commit to develop. But it turns out it’s a bit harder.
Alrighty then! Thank you.
So for OpenGL, you should create the target image via the OpenGLImageType class. Also, JUCE does support image patterns, so that would take care of the tiling for you.
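Putting those suggestions together, a minimal sketch might look like the following. This is an assumption-laden illustration, not tested code: it assumes the JUCE modules are available, that `tile` is the source image you loaded elsewhere, and that an active OpenGL context exists (OpenGL-backed images need one; without it, drop the `OpenGLImageType()` argument to fall back to the default software image type):

```cpp
// Sketch only: create a destination Image backed by an OpenGL texture,
// then let the renderer do the tiling via a tiled image fill.
juce::Image makeTiledImage (const juce::Image& tile, int tilesX, int tilesY)
{
    juce::Image dest (juce::Image::ARGB,
                      tile.getWidth()  * tilesX,
                      tile.getHeight() * tilesY,
                      true,                       // clear the new image
                      juce::OpenGLImageType());   // GPU-backed, not software

    juce::Graphics g (dest);                  // Graphics::Graphics (const Image&)
    g.setTiledImageFill (tile, 0, 0, 1.0f);   // repeat the tile as a pattern
    g.fillRect (dest.getBounds());            // one fill covers every tile
    return dest;
}
```

The result can then be written out with an ImageFileFormat such as PNGImageFormat, which covers the “save the results” part of the original question.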