Using OpenGL for 2D Components


A few questions on the use of OpenGL for 2D components.

I noticed that by attaching an OpenGLContext to a Component, the regular `paint()` method automatically renders into an OpenGL framebuffer - this is really cool. I'm wondering what the most efficient way to use images is in this case.

For example, I create the same image in two ways: first as a normal JUCE image, and secondly as an image of `OpenGLImageType`. I understand the second lives in the OpenGL framebuffer. Now, it turns out that drawing both images inside the paint method works. Would I be correct in assuming the `OpenGLImageType` one is more efficient, i.e. not requiring a copy or conversion - or is it the same? Additionally, is there something else I should be doing to make maximum use of the OpenGL system here? If I'm applying a transform to these images, e.g. using `drawImageTransformed()`, will this be processed by OpenGL in an efficient way?

One other question: if I try to perform some OpenGL 2D operations in `renderOpenGL()` (in addition to my paint stuff), it fails with an OpenGL "out of memory" error. Not sure why this is - use of 3D stuff works here. This might be due to my use of the OPENGL_FIXED_FUNCTION stuff. Am I correct to assume this is to be phased out?

Thanks for any answers.

– hugh.

Yep, the best way to store your images is as an OpenGLImage, because it's right there in GPU memory. It's quite quick to use a normal image, but it does mean that the image data gets copied to the GPU each time you draw it (…it could be possible to come up with some kind of cunning cache algorithm for that, so that unchanged image data is cached on the GPU, but I've not had a chance to try that yet).
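For anyone following along, here's a minimal sketch of the two approaches (not from the thread itself - it assumes a component that already has an OpenGLContext attached, and `MyComponent` is a made-up name; in real code you'd cache the images as members rather than recreating them in every paint call):

```cpp
#include <JuceHeader.h>

class MyComponent : public juce::Component
{
public:
    void paint (juce::Graphics& g) override
    {
        // Software-backed image: its pixels live in main memory and are
        // uploaded to the GPU each time the image is drawn.
        juce::Image softwareImage (juce::Image::ARGB, 128, 128, true);

        // GPU-backed image: created via OpenGLImageType, so its pixels live
        // in a GL framebuffer and no per-draw upload is needed. This must be
        // created while a GL context is active (e.g. inside paint() when an
        // OpenGLContext is attached to the component).
        juce::Image glImage (juce::OpenGLImageType().create (juce::Image::ARGB, 128, 128, true));

        g.drawImageAt (softwareImage, 0, 0);
        g.drawImageAt (glImage, 128, 0);

        // Either way, transforms are applied by the GPU:
        g.drawImageTransformed (glImage, juce::AffineTransform::rotation (0.5f));
    }
};
```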

But either way, all the transforms will be done by the GPU, so are basically free.

No idea why you’d run out of memory, but yes, I’m going to get rid of the old fixed-function stuff, the world is definitely moving away from that sort of thing and towards shaders instead.

Is this the same for paths as well? Every time you draw a path, does it get rasterized and copied to the GPU? Or is there some caching there?

I’m wondering if you have any caching code in place I may be able to re-use for the D2D renderer.

No, there’s no caching with paths, the points are always uploaded each time. I don’t really think it’d be too easy to cache them.

I have a (set of) images that I read from an input stream, and they have to be redrawn about 20 times per second using overlays. So I thought to make the animation smoother I’d use OpenGL. I attached an OpenGLContext to my top level component, all fine. Now if I understand the above correctly I could gain even more by converting my images to OpenGL images. I tried to do it the following way:

imageInputStream = imageurl.createInputStream (false);
bgimage = ImageFileFormat::loadFrom (*imageInputStream);  // decode the stream into a normal Image first
bgimage_opengl = openGLImageType.convert (bgimage);

but I get a jassert in the convert routine, complaining about no active context. I thought I had an active context by attaching one to the top-level component? Or how is this supposed to be done?

In OpenGL, contexts are thread-local - you can't use any GL functionality outside of either your render routine or the newOpenGLContextCreated() method.
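A sketch of how that conversion might look (assuming the class implements juce::OpenGLRenderer, and carrying over the `bgimage` / `bgimage_opengl` member names from the snippet above):

```cpp
// Runs on the GL thread with the context already made active,
// so OpenGLImageType::convert() is safe to call here. Assumes
// `bgimage` was already loaded on the message thread.
void newOpenGLContextCreated() override
{
    bgimage_opengl = juce::OpenGLImageType().convert (bgimage);
}
```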

Is the newOpenGLContextCreated() method still around? If no, that would only leave the paint() method, if I understand that correctly?

Yes, it’s still there (?)

Right, I thought it was a method of the old OpenGLComponent.
So to use OpenGL images I have to make my class an OpenGLRenderer, correct? I thought it could be done the “quick’n easy” way with just attaching a context to the component.

Well it can.

But if you want to use the newOpenGLContextCreated method, then you’d need to implement the OpenGLRenderer interface too.
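Putting the pieces together, a minimal skeleton might look like this (a sketch, not code from the thread; the renderOpenGL() body is left empty since the 2D painting still happens in paint()):

```cpp
#include <JuceHeader.h>

class MyComponent : public juce::Component,
                    private juce::OpenGLRenderer
{
public:
    MyComponent()
    {
        glContext.setRenderer (this);  // receive the renderer callbacks
        glContext.attachTo (*this);    // paint() now renders via GL
    }

    ~MyComponent() override
    {
        glContext.detach();            // detach before the component is destroyed
    }

private:
    void newOpenGLContextCreated() override
    {
        // GL context is active here - safe to create/convert OpenGLImageType images.
    }

    void renderOpenGL() override {}          // custom GL drawing would go here
    void openGLContextClosing() override {}  // release any GL resources here

    juce::OpenGLContext glContext;
};
```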