I’m diving headfirst into the world of OpenGL and it’s going smoothly so far. I’m able to render a few things and I have an OK grasp on the basics, but I’d like to get some opinions on how to realistically manage a scenario like, say, a sequencer. Here’s an example from my product:
Traditionally the way I’ve done this using JUCE:
- The sequencer is a component with paint methods for the bars and playback marker, controlled with a timer
- Each sample on the track is its own component with its own paint method, of course
- Ditto for the custom components on the left, which hold the track names, gain sliders, etc.
Now I’m trying to figure out how I would structure this using OpenGL. My initial, very inexperienced thinking:
- Each component inherits from an “OpenGLComponent” class that has the helper methods and structs for vertex and texture information
- There is an OpenGLRenderingManager that organizes the scene by vertex shader/fragment shader pair
- Each frame, the manager runs through the different shaders and organizes the glDraw* calls: it loops over the different types of components and updates the shader uniforms with each component’s information, and the shaders take care of transforming from pixel coordinates to clip space (-1 to 1)
I’m aware I know very little, but would the above make sense? Or should each component have its own rendering loop? Also, should there be one OpenGLContext for the whole scene, or is it OK to have multiple, one per object?
Any advice appreciated!!