OpenGL: Organizing more complex scenes (sequencer)

I’m diving headfirst into the world of OpenGL and it’s going smoothly so far. I can render a few things and have an OK grasp of the basics, but I’d like to get some opinions on how to realistically manage a scenario like, let’s say, a sequencer. Here’s an example from my product:

[screenshot: a 3-track sequencer]

Traditionally the way I’ve done this using JUCE:

  • The sequencer is a component with paint methods for the bars and playback marker, controlled with a timer
  • Each sample on the track is its own component with its own paint method, of course
  • Ditto for the custom components on the left which have the track names, gain slider, etc.

Now I’m trying to figure out how I would structure this using OpenGL. My initial, very inexperienced thinking:

  • Each component inherits from an “OpenGLComponent” class which has the helper methods and structs for vertex and texture information
  • There is an OpenGLRenderingManager that organizes the scene by vertex/fragment shader
  • Each frame the manager runs through the different shaders and groups the glDraw calls, looping over the different types of components and updating the shader uniforms with each component’s information - the shaders take care of transforming from pixel coordinates to clip space (-1 to 1); see the sketch after this list
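
Here’s a rough sketch of what I mean. OpenGLComponent, OpenGLRenderingManager, pushUniforms, and the "windowSize" uniform are all just my placeholder names; only the juce::OpenGLShaderProgram calls are real JUCE API:

```cpp
// Rough sketch of the structure above. OpenGLComponent and
// OpenGLRenderingManager are placeholder names, not JUCE or OpenGL classes.
#include <juce_opengl/juce_opengl.h>
#include <utility>
#include <vector>

struct OpenGLComponent
{
    virtual ~OpenGLComponent() = default;
    virtual void pushUniforms (juce::OpenGLShaderProgram& shader) = 0; // per-component uniforms (bounds, colour, ...)
    virtual void draw() = 0;                                           // issues this component's glDraw* call
};

class OpenGLRenderingManager
{
public:
    void render (float windowWidth, float windowHeight)
    {
        // One pass per shader: bind it once, then draw every component that uses it.
        for (auto& [shader, components] : shaderGroups)
        {
            shader->use();

            // The vertex shader can map pixel coordinates to clip space (-1..1)
            // given the window size, so components can keep working in pixels.
            shader->setUniform ("windowSize", windowWidth, windowHeight);

            for (auto* component : components)
            {
                component->pushUniforms (*shader);
                component->draw();
            }
        }
    }

private:
    // Filled in during setup: one entry per shader, with the components using it.
    std::vector<std::pair<juce::OpenGLShaderProgram*,
                          std::vector<OpenGLComponent*>>> shaderGroups;
};
```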

I’m aware I know very little, but would the above make sense? Or should each component have its own rendering loop? Also, should there be one OpenGLContext for the whole scene, or is it OK to have multiple contexts, one per object?

Any advice appreciated!!

What did you eventually decide about this? I am curious about this as well

+1, I am also interested in this one

You should definitely go for a single OpenGL context. On Windows especially, multiple contexts in one process slow the application down dramatically, because each context spawns its own rendering thread and they all try to synchronize to the GPU’s frame rate through some locking :grimacing:

It’s not so difficult to build a class that owns the single context and manages all renderers. In the render callback, it should first set the viewport to the bounds of the component about to be rendered, then call that component’s rendering function. This way, each GL component can render on a normalized GL coordinate system that fills the complete surface of the component, which makes things a lot easier
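
Something like this, roughly. Renderable and getBoundsInTopLevel() are placeholders I made up; glViewport and OpenGLContext::getRenderingScale() are real calls:

```cpp
// Sketch of the single-context manager's render callback.
#include <juce_opengl/juce_opengl.h>
#include <vector>

using namespace juce::gl;   // JUCE 6+ keeps the GL functions in this namespace

struct Renderable
{
    virtual ~Renderable() = default;
    virtual juce::Rectangle<int> getBoundsInTopLevel() = 0; // pixel bounds relative to the top-level component
    virtual void render() = 0;                              // draws in normalized -1..1 coordinates
};

void renderAll (juce::OpenGLContext& context,
                juce::Component& topLevel,
                const std::vector<Renderable*>& renderers)
{
    const auto scale = (float) context.getRenderingScale(); // physical pixels per logical pixel

    for (auto* r : renderers)
    {
        const auto b = r->getBoundsInTopLevel();

        // Restrict GL output to this component's area. GL's origin is
        // bottom-left, JUCE's is top-left, hence the Y flip.
        glViewport ((GLint)   (b.getX() * scale),
                    (GLint)   ((topLevel.getHeight() - b.getBottom()) * scale),
                    (GLsizei) (b.getWidth()  * scale),
                    (GLsizei) (b.getHeight() * scale));

        r->render();
    }
}
```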

Thanks for the insight. I have something working with the method you describe. It’s not terribly efficient yet because my draw calls aren’t optimal (I call glDrawElements once per component), but hey, one step at a time.

Scratching my head, though, over how I’m going to do some of the more complicated shapes, paths, etc. For example the waveform, which I typically draw with the usual JUCE methods, plus rounded rectangles for buttons, the sliders and other UI widgets, and so on.

You are aware of the fact that you can simply attach the GL context to your top-level component, and then all JUCE widgets are rendered by OpenGL behind the scenes without you having to do anything manually? Writing custom shaders only makes sense for all kinds of (realtime) data visualisation.
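
For reference, attaching the context looks roughly like this; attachTo() and detach() are the actual JUCE calls:

```cpp
// One context attached to the top-level component: the whole child
// hierarchy is then composited through OpenGL.
#include <juce_opengl/juce_opengl.h>

class MainComponent : public juce::Component
{
public:
    MainComponent()           { glContext.attachTo (*this); }
    ~MainComponent() override { glContext.detach(); }

private:
    juce::OpenGLContext glContext;  // every child component now paints through GL
};
```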

Rendering a basic waveform is not that complicated: you can simply use a triangle strip to fill the space between the upper and lower bound values. However, this will not create smooth edges. If you want those, you need to generate some more elaborate triangles. But as this is general computer graphics stuff, you’ll find a lot of resources on that topic on the internet!
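
A minimal sketch of the triangle-strip idea, assuming you already have per-column (lower, upper) values normalized to -1..1; buildWaveformStrip is a made-up helper name:

```cpp
// Build vertices for a min/max waveform drawn as GL_TRIANGLE_STRIP.
#include <utility>
#include <vector>

std::vector<float> buildWaveformStrip (const std::vector<std::pair<float, float>>& minMax)
{
    std::vector<float> vertices;
    vertices.reserve (minMax.size() * 4);  // 2 vertices x 2 floats per column

    const auto n = minMax.size();
    for (size_t i = 0; i < n; ++i)
    {
        // Spread the columns across the -1..1 x range.
        const float x = n > 1 ? -1.0f + 2.0f * (float) i / (float) (n - 1) : 0.0f;

        // Alternating lower/upper vertices: as a triangle strip, consecutive
        // pairs fill the area between the two bounds.
        vertices.push_back (x); vertices.push_back (minMax[i].first);   // lower bound
        vertices.push_back (x); vertices.push_back (minMax[i].second);  // upper bound
    }

    return vertices;  // upload to a VBO, then glDrawArrays (GL_TRIANGLE_STRIP, 0, (GLsizei) (n * 2))
}
```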


Yes, I’ve since learned that you can attach the context to the top-level component, but I was under the impression that going the full-custom route would let me squeeze out more performance. I also wanted to try some funky stuff with shaders to get some nice lighting, blending, and possibly 3D effects in there as well.
