OpenGL + GraphicsContext::drawImage Performance

I’m doing some profiling and optimization for a meter Component that has its own OpenGLRenderer:

In the OpenGL render callback I use GraphicsContext::drawImage() to copy over the desired section of the meter image. The bulk of the cycles is spent in OpenGLTexture::release(), which in turn calls glDeleteTextures().

I’ve looked at OpenGLImageType and OpenGLFrameBuffer, but I’m not sure how I’d use them for this purpose, as it seems there is already some image caching going on?

I suppose ideally I’d just like to store the image on the GPU and copy the desired pixels from there, but I’m not sure how to go about doing so. If anybody could help point me in the right direction, that’d be much appreciated!


Are you using a subsection image?

If you just have a normal image, and don’t delete it, then the first time you render it, the GL code will cache and re-use that texture so that subsequent draws are super-efficient. But of course if you mess with the image or keep creating a new one, etc, the cached texture has to be thrown away.

I’m using

void drawImage (const Image& imageToDraw,
                int destX, int destY, int destWidth, int destHeight,
                int sourceX, int sourceY, int sourceWidth, int sourceHeight,
                bool fillAlphaChannelWithCurrentBrush = false) const

which calls Image::getClippedImage (const Rectangle<int>& area) const, so I am indeed using a subsection of my normal image. I think I’m missing something important here. Is there a way to cache two images (the full-height background and foreground images) and just move the desired pixels into the active frame buffer each render frame?

Hmm, yes, that would be an edge case where it’d be repeatedly re-caching the image. I’d suggest just creating a copy of the subsection you want and drawing the whole thing, instead of using this particular method to draw it.

Or could you just reduce the clip region and draw the whole original image? Then it would appear cropped.

The tricky part is that I’m using a different subsection of the image for (almost) every render frame. When the meter value goes up or down, the subsection bounds change & a new subsection of the meter image is needed.

@dave96 How do you mean? Reduce the bounds of the context?

Yes so presumably you have a background image and a foreground image that compose your meter? Then you draw a proportion of the foreground image (anchored at the bottom) to represent the levels?

If you simply clip the bounds of your foreground drawing so it only draws from the bottom up to the current level, won’t that have the same effect as getting a clipped image?

Yes there’s a foreground & background image.

I use a (similar?) technique for regular JUCE painting via Component::repaint (const Rectangle<int>& area). On timer callbacks I mark the changed section as dirty using this repaint & then use those bounds in paint() via Graphics::getClipBounds() to effectively only paint the required section. But even then I’m splicing in a subsection of the image.

Are you suggesting something similar for my OpenGL renderOpenGL() callback? How would that be implemented? It seems like the attached OpenGLContext requires a full repaint of the bounds of the Component it’s attached to on every render callback.

btw I have the settings:

    openGLContext.setRenderer (this);
    openGLContext.setContinuousRepainting (true);
    openGLContext.setComponentPaintingEnabled (false);
    openGLContext.attachTo (*this);

and then to use it:

void MeterComponent::renderOpenGL()
{
    std::unique_ptr<LowLevelGraphicsContext> gc (createOpenGLGraphicsContext (openGLContext, getWidth(), getHeight()));
    if (gc == nullptr)
        return;

    Graphics g (*gc);
    paint (g);
}

(I pulled that last bit from the OpenGL demo in the Demo app)

Aight, here is a quick solution that produces good results in a snap without having to dive too deep into OpenGL land: Load the 2 images as OpenGLTextures and then in renderOpenGL() use OpenGLContext::copyTexture().

Yay JUCE!