OpenGL speed issues


I am doing some 2D drawing and having some performance issues.
I am drawing multiple layers some of which may have transparency.
I can either draw from back to front, with each layer drawn on top of the previous one, or possibly have a shader draw front to back and only keep processing layers and mixing colours while a layer is not opaque.
I have tried code from the demos (both OpenGlDemo.cpp and OpenGLDemo2D.cpp).
My component is attached to an OpenGLContext for hardware rendering.


If I draw the layers in the renderOpenGL() method, it draws 60 times per second.
If I draw the layers in the paint() method, it only draws when something changes, but there are a few things which trigger far too many repaints and are slow (hovering over a slider, zooming, panning).
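For context, the setup is roughly the following (a sketch only; the class and member names are placeholders). The component implements OpenGLRenderer so per-frame drawing can live in renderOpenGL(), while paint() remains available for the on-demand path, and setContinuousRepainting() controls how often the context renders:

```cpp
class LayerCanvas : public juce::Component,
                    public juce::OpenGLRenderer
{
public:
    LayerCanvas()
    {
        glContext.setRenderer (this);
        glContext.attachTo (*this);

        // true drives the ~60 renders/second behaviour; with false the context
        // only renders when it is triggered or the component is repainted.
        glContext.setContinuousRepainting (true);
    }

    ~LayerCanvas() override                  { glContext.detach(); }

    // OpenGLRenderer callbacks
    void newOpenGLContextCreated() override  {}
    void renderOpenGL() override             {}    // per-frame drawing would go here
    void openGLContextClosing() override     {}

    // the on-demand alternative
    void paint (juce::Graphics& g) override  { juce::ignoreUnused (g); }

private:
    juce::OpenGLContext glContext;
};
```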

I have about 15 possible "layers" which may be on or off.
For the rest of this question I will just talk about 3 layers to simplify the discussion, but keep in mind that I need a method that is efficient in general, not just one that happens to work because a particular layer is small.
The back layer can either be a solid colour, or an image from a file.
The next layer is a pattern (a list of vertices), which can be transformed (i.e. rotated, resized, relocated, flipped, etc.).
The next layer is a cross-hairs indicating a position relative to the pattern.
Right now, I keep a matrix (AffineTransform) holding the rotation, scale, and translation of the design, plus a list of the vertices that define the design.
I also have a viewMatrix (zoom and pan).
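Concretely, the state looks something like this (a sketch; the member names are placeholders, and the design is stored as a juce::Path built once from the vertex list):

```cpp
juce::Path designPath;                 // built once from the list of vertices
juce::AffineTransform designMatrix;    // rotation / scale / translation of the design
juce::AffineTransform viewMatrix;      // zoom / pan

void rebuildDesignPath (const juce::Array<juce::Point<float>>& vertices)
{
    designPath.clear();

    if (vertices.size() > 1)
    {
        designPath.startNewSubPath (vertices.getReference (0));

        for (int i = 1; i < vertices.size(); ++i)
            designPath.lineTo (vertices.getReference (i));
    }
}

void setDesignTransform (float angleRadians, float scale, juce::Point<float> offset)
{
    designMatrix = juce::AffineTransform::rotation (angleRadians)
                       .scaled (scale)
                       .translated (offset.x, offset.y);
}
```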


What I think is happening now:
In either paint() or renderOpenGL(), I apply the viewMatrix with g.addTransform() and then draw everything with g.drawImage(), g.strokePath(), etc. (using begin/endTransparencyLayer where necessary).
I believe this is rendering the path into lines in software on each invocation, and then using the hardware to render the lines which make up the juce::Path.
There is a trade-off between running the code in paint() and in renderOpenGL(): paint() generally gets called less often (only when something changes), but in the worst case (pan, zoom, hovering over scrollbars) it is horrendous.
renderOpenGL() tries to run about 60 times per second.
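So each pass is essentially of this shape (again just a sketch, not the exact code; backgroundImage, backgroundColour, patternOpacity and crossX/crossY are placeholder members):

```cpp
void drawLayers (juce::Graphics& g)
{
    g.addTransform (viewMatrix);       // zoom + pan applies to everything drawn below

    // layer 1: a solid colour or an image from a file
    if (backgroundImage.isValid())
        g.drawImageAt (backgroundImage, 0, 0);
    else
        g.fillAll (backgroundColour);

    // layer 2: the pattern, with its own transform, possibly semi-transparent
    g.setColour (juce::Colours::white);
    g.beginTransparencyLayer (patternOpacity);
    g.strokePath (designPath, juce::PathStrokeType (1.0f), designMatrix);
    g.endTransparencyLayer();

    // layer 3: the cross-hairs (two lines)
    g.drawLine (crossX - 10.0f, crossY, crossX + 10.0f, crossY);
    g.drawLine (crossX, crossY - 10.0f, crossX, crossY + 10.0f);
}
```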


Although they both use the hardware to render the lines, most of the performance hit seems to come from constantly re-sending the data to the card with g.draw??? and g.strokePath(), etc.


What I would like to be able to do, but can't see how and don't know if it is possible:
Supply multiple sets of AffineTransforms and data to the video card, along with a transform for the viewMatrix (pan and zoom).
Update only the data which changes.
Have the hardware constantly redrawing the data it has.
For example:
I somehow send an image to the card.
I send a matrix and a pattern to the card (like drawing a castle as a line drawing).
I send the cross-hairs to the card (2 lines).
I rotate the castle, so I just update the matrix associated with the castle data and don't have to resend all the transformed points to the card.
The user then pans and zooms in on a piece of the picture, so I just update the view matrix, and the card redraws everything using the new viewMatrix AffineTransform without me having to resend the image, design, and cross-hair matrices and data.
Is that possible? It seems much more efficient than continuously resending data.
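In raw OpenGL terms, what I am imagining is roughly the following (a sketch only, in ES 2.0-style GL; the helper names are mine, I'm assuming the shader is a juce::OpenGLShaderProgram built from these sources, and that the 'position' attribute is bound to location 0 with glBindAttribLocation before linking):

```cpp
static const char* vertexShaderSrc =
    "attribute vec2 position;\n"      // per-layer geometry, uploaded once
    "uniform mat4 modelMatrix;\n"     // rotation / scale / translation of this layer
    "uniform mat4 viewMatrix;\n"      // pan / zoom, shared by every layer
    "void main()\n"
    "{\n"
    "    gl_Position = viewMatrix * modelMatrix * vec4 (position, 0.0, 1.0);\n"
    "}\n";

static const char* fragmentShaderSrc =
    "uniform mediump vec4 layerColour;\n"
    "void main() { gl_FragColor = layerColour; }\n";

// Done once per layer, or only when its geometry really changes:
void uploadLayerGeometry (GLuint& vbo, const float* xyPairs, int numVertices)
{
    glGenBuffers (1, &vbo);
    glBindBuffer (GL_ARRAY_BUFFER, vbo);
    glBufferData (GL_ARRAY_BUFFER, numVertices * 2 * sizeof (float), xyPairs, GL_STATIC_DRAW);
}

// Done every frame: no geometry is re-sent, only two 4x4 matrices per layer.
void drawLayer (juce::OpenGLShaderProgram& shader, GLuint vbo, int numVertices,
                const float* model4x4, const float* view4x4)
{
    shader.use();
    shader.setUniformMat4 ("modelMatrix", model4x4, 1, false);
    shader.setUniformMat4 ("viewMatrix",  view4x4,  1, false);

    glBindBuffer (GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer (0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);   // 'position' at location 0
    glEnableVertexAttribArray (0);
    glDrawArrays (GL_LINE_STRIP, 0, numVertices);
}
```

Rotating the castle would then just mean sending a new modelMatrix, and a pan/zoom just a new viewMatrix; the vertex data itself would stay on the card.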


I need this to run on Windows and Android 4.2, and possibly OS X and Linux in the future.


One final note: due to the hardware we are using, we would need to use OpenGL ES 2.2 or lower, so if this is only possible using a higher version of OpenGL we are kind of hosed, but I would still like to know.

 

Kismet

 

If the content is not updated that often, you can try drawing into a texture as a cache, and then just apply the rotation/transform when drawing that cached texture.
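Something like this, roughly (a sketch only; the member names are made up):

```cpp
juce::Image layerCache;   // cached rendering of the slow layers

void rebuildCache (int width, int height)
{
    layerCache = juce::Image (juce::Image::ARGB, width, height, true);
    juce::Graphics g (layerCache);

    // ...draw the background, pattern, etc. here, without the view transform...
}

void paint (juce::Graphics& g)
{
    // only the cheap part runs per repaint: blit the cache with the current transform
    if (layerCache.isValid())
        g.drawImageTransformed (layerCache, viewMatrix);

    // cheap layers (e.g. the cross-hairs) can still be drawn live on top
}
```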