One OpenGLContext with multiple OpenGLRenderers?

Currently, I have two separate windows and I want them to have their own independent rendering threads (so they can run at different update frequencies). So far so good. However, I also want them to share an OpenGL context so that they can access each other’s resources, but I can’t get this to work. I’ve tried

 openGLContextB.setNativeSharedContext(openGLContextA.getRawContext());

but to no avail. At least, OpenGLContext::getRawContext() returns different values for both, but maybe there is a better way to test this?

Ideally, I would also like to initialize my OpenGLContext during my program startup (so before it starts drawing components). Is there a way I could solve these issues by somehow creating a NativeContext during startup?

A slightly more detailed explanation of what I’m doing: the primary renderer creates real-time images on one screen, with the rendering currently done in OpenCL (it might switch to Vulkan/Metal in the future). The secondary renderer shows the GUI with the controls and should be able to display a small thumbnail of what the primary renderer produces, hence I want access to the final displayed image of the primary renderer.

Let me know if you want some more information and thanks in advance!


Does anyone have any input on this? It’s really starting to become an issue and I’d like to solve it.

At least, OpenGLContext::getRawContext() returns different values for both, but maybe there is a better way to test this?

I’m still experimenting myself, but on second thought this doesn’t look like the right way to test it: even two successfully shared contexts remain distinct contexts, so their raw handles would differ. So what would be the best way to test whether two contexts are actually “shared”?
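One way I might verify sharing (a minimal sketch, assuming both contexts are already attached and rendering, and that setNativeSharedContext was called on openGLContextB before it was attached, which the docs seem to require): create and bind a texture on A’s GL thread, then check from B’s GL thread whether that texture name is recognised there. Texture names are shared between shared contexts, so glIsTexture() should only return GL_TRUE on B if sharing actually works.

GLuint textureId = 0;

// Create the texture on A's GL thread; binding it once turns the
// generated name into an actual texture object
openGLContextA.executeOnGLThread ([&textureId] (juce::OpenGLContext&)
{
    glGenTextures (1, &textureId);
    glBindTexture (GL_TEXTURE_2D, textureId);
}, true);   // block until finished

bool isShared = false;

// If the contexts share resources, B recognises the same texture name
openGLContextB.executeOnGLThread ([&isShared, textureId] (juce::OpenGLContext&)
{
    isShared = (glIsTexture (textureId) == GL_TRUE);
}, true);

DBG (isShared ? "contexts are shared" : "contexts are NOT shared");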

I would also like to know the answer to this question. Can you use multiple OpenGLRenderers with a single OpenGLContext?

From the JUCE source code, it seems that any OpenGLContext can only have a single OpenGLRenderer.

Yet, from @fr810’s prior suggestion in the forum:

If you have multiple components requiring opengl and you want to render all GUI components with OpenGL, then I strongly advise that you attach an OpenGLContext to the top-level component only. The top-level component can then keep track of all the subcomponents which require renderOpenGL callbacks and simply forward the renderOpenGL callback to them.

Here, Fabian suggests that a top-level component attached to an OpenGLContext could be used to keep track of multiple Components, some of which may be OpenGLRenderers that have the renderOpenGL() callback. This makes it sound like a single OpenGLContext can be used for multiple OpenGLRenderers.

From what I understand, this “keeping track of subcomponents that require renderOpenGL() callbacks” is NOT automatic since an OpenGLContext can only be associated with a single OpenGLRenderer. Therefore, I believe the renderOpenGL() callbacks must then be manually called by a top-level OpenGLRenderer attached to the OpenGLContext.

Here is a stubbed out idea of how I believe this might work. @fr810 is this a correct interpretation of your suggestion?

/** The top-most component of the application that we wish to render
    using OpenGL. It contains any JUCE Components (or hierarchies of
    Components) to be rendered with OpenGL, as well as any custom
    OpenGL-based Components that render themselves by calling
    lower-level OpenGL commands and using shader programs.
    (In real code, MyOpenGLComponent below would need to be defined
    before this class.)
 */
class TopLevelComponent : public Component,
                          private OpenGLRenderer
{
public:
    TopLevelComponent()
        : myGLComponentA (&openGLContext),
          myGLComponentB (&openGLContext)
    {
        // Request an OpenGL 3.2 (or newer) context
        openGLContext.setOpenGLVersionRequired (OpenGLContext::OpenGLVersion::openGL3_2);

        openGLContext.setRenderer (this);
        openGLContext.attachTo (*this);
        openGLContext.setContinuousRepainting (true);

        // Add custom OpenGL Components; they implement OpenGLRenderer,
        // so no cast is needed
        renderers.push_back (&myGLComponentA);
        renderers.push_back (&myGLComponentB);
        addAndMakeVisible (myGLComponentA);
        addAndMakeVisible (myGLComponentB);

        // Add regular JUCE Components
        addAndMakeVisible (slider);
        addAndMakeVisible (button);
    }

    ~TopLevelComponent() override
    {
        openGLContext.setContinuousRepainting (false);
        openGLContext.detach();
    }

    void newOpenGLContextCreated() override
    {
        for (auto * renderer : renderers)
            renderer->newOpenGLContextCreated();
    }


    void openGLContextClosing() override
    {
        for (auto * renderer : renderers)
            renderer->openGLContextClosing();
    }
    
    
    void renderOpenGL() override
    {
        for (auto * renderer : renderers)
            renderer->renderOpenGL();
    }

private:
    OpenGLContext openGLContext; // Single shared context

    // Custom OpenGL Components
    std::vector<OpenGLRenderer *> renderers;
    MyOpenGLComponent myGLComponentA;
    MyOpenGLComponent myGLComponentB;

    // Normal JUCE Components (rendered with OpenGL thanks to the context
    // being attached to this parent Component)
    Slider slider;
    TextButton button;  // Button itself is abstract, so use a concrete subclass
};

//=================================================================

/** An OpenGL-based JUCE Component that has custom OpenGL rendering.

     There could be any number of different custom OpenGL-based
     Components similar to this one, which could all be set as
     children of the TopLevelComponent and be added to the
     `renderers` vector to be driven by the top-level OpenGLRenderer.
 */
class MyOpenGLComponent : public Component,
                          public OpenGLRenderer
{
public:
    MyOpenGLComponent (OpenGLContext* externalOpenGLContext)
        : context (externalOpenGLContext)
    {
    }

    void newOpenGLContextCreated() override
    {
        // Compile shader programs . . .
        // Needs access to the OpenGLContext context member to
        // construct an OpenGLShaderProgram.

        // Set up any needed OpenGL buffers . . .
        // Needs access to the OpenGLContext context member to
        // generate/bind buffers.
        // Ex: context->extensions.glGenVertexArrays (1, &VAO);
        //     context->extensions.glBindVertexArray (VAO);
    }


    void openGLContextClosing() override
    {
        // Cleanup any OpenGL buffers . . .
        // Needs access to the OpenGLContext context member to deallocate
        // Ex: context->extensions.glDeleteVertexArrays (1, &VAO);
        //     context->extensions.glDeleteBuffers (1, &VBO);
    }
    
    
    void renderOpenGL() override
    {
        // Do some custom rendering . . .
        // Needs access to the OpenGLContext context member for things
        // like rendering scale: context->getRenderingScale()
    }

private:
    OpenGLContext * context; 
};

I am unsure if this helps solve @YetAnotherGuy’s multi-window issue, but I was at least hoping to answer this thread’s main question for the case of a single window.

Hello,
I asked the question that led to the answer you cited. The idea you are describing is basically how I solved it, and it’s working fine for me. But I don’t think this answers the OP’s question: in this example everything is handled in one render thread and everything is updated at the same time.


@janhanten Were there any other special things you had to do to get a single OpenGLContext working with multiple OpenGLRenderers? I have just implemented the above code in my project and am struggling to get my OpenGL-based components (like MyOpenGLComponent in the code example above) to show up at the proper place on the screen. It also seems that regular JUCE components are painting over my OpenGL-based components.

Did you run into any of these problems?

I see that part of my issue was that my custom OpenGL Component’s glViewport call needed updating to position the viewport properly in X and Y on the screen.

Regardless, it seems I am unable to get my custom OpenGL Components rendered on top of regular JUCE Graphics Components when using a single top-level OpenGLContext with a top-level OpenGLRenderer that calls renderOpenGL() on any child Components that are OpenGLRenderers.

If anyone is able to achieve this mix of regular JUCE Graphics Components and custom shader-based OpenGL Components using just a single OpenGLContext, I would be interested to hear how this was done.

@TonyAtHarrison, I came across another thread where you described your OpenGL setup as:

In this instance, were you using multiple OpenGLRenderers as individual Components, all associated with a single managing OpenGLContext and OpenGLRenderer as described above? Could you offer any help with this general question?

It also seems part of my issue is that I need to “leave holes” in JUCE graphical Components as you describe here:

No because this is the way I want it to behave. I am rendering with glViewport to sub areas of the UI. Sometimes I need labels or buttons for input. This is done with normal juce components rendering above those areas.
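In Component terms, “leaving a hole” can be as simple as excluding the GL sub-area from the parent’s paint pass. A minimal sketch (glArea is a hypothetical member holding that sub-area’s bounds):

void paint (Graphics& g) override
{
    // Skip the region where the GL renderer draws, so the 2D paint
    // pass doesn't cover the GL output
    g.excludeClipRegion (glArea);
    g.fillAll (Colours::darkgrey);
}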


Ah, I see: it seems other developers work around JUCE painting over the OpenGL components by “leaving holes”, as mentioned in my last post in this thread. So JUCE painting over the top is no longer an issue I’m dealing with, since “leaving holes” is the solution.

But I am still struggling to handle glViewport properly. @janhanten How do you properly handle glViewport?

I too am trying to accomplish this by using the following code:

const float renderingScale = (float) openGLContext->getRenderingScale();
glViewport (juce::roundToInt (renderingScale * (float) getX()), 
            juce::roundToInt (renderingScale * (float) getY()),
            juce::roundToInt (renderingScale * (float) getWidth()),
            juce::roundToInt (renderingScale * (float) getHeight()));

For some reason, my viewport properly moves to the location specified, but my 2D shader quad stays locked all the way to the left side of the screen. If I resize my interface, I can move my viewport over to the left side of the screen to reveal my shader quad, but as I resize, the viewport moves away from my shader quad to a black background. It seems my shader quad is not properly locked to the GL viewport.

Just quickly scanned over this topic and don’t have a lot of time to go into detail right now, but maybe this helps you. I have built something like this multiple times now; the first implementation of a multi-GL-renderer approach is open source and you’ll find it here

Bounds computation:

Applying the viewport and scissor test

Note that I haven’t looked into that piece of code for quite a while. It works by assigning the top-level GL component containing all subcomponents to the WindowGLContext, which defines the overall coordinate system, and then calculating the relative clipping bounds for the subcomponents.
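In rough form, the idea looks something like this (a sketch with assumed names, not the actual repo code):

// Sketch: position a child renderer's viewport relative to the top-level
// component the context is attached to, in physical pixels, with GL's
// bottom-left origin, and restrict drawing to that area via the scissor test
void setViewportFor (juce::Component& child, juce::Component& topLevel,
                     juce::OpenGLContext& context)
{
    const auto scale = context.getRenderingScale();

    // Child bounds converted into the top-level component's coordinate space
    const auto area = topLevel.getLocalArea (&child, child.getLocalBounds());

    const auto x = juce::roundToInt (scale * area.getX());
    const auto y = juce::roundToInt (scale * (topLevel.getHeight() - area.getBottom())); // flip y
    const auto w = juce::roundToInt (scale * area.getWidth());
    const auto h = juce::roundToInt (scale * area.getHeight());

    glViewport (x, y, w, h);
    glEnable (GL_SCISSOR_TEST);
    glScissor (x, y, w, h);
}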

I came up with even less complicated bound computation solutions in later (closed source) projects which I can’t share here, but maybe this helps as a starting point :wink:


For some reason I am still struggling with glViewport-related viewing of my custom GL components connected to a single OpenGLContext. I have tried connecting my existing visualizations to @PluginPenguin’s WindowOpenGLContext, but have been unable to get my particular GL visualizations to render at the correct location in the viewport. One of my visualizations always renders in the bottom left corner of the window and becomes hidden the further I move the component bounds away from the bottom left corner. This may be an issue in my shader program for that Component. Does my shader program need to receive the x and y coordinates of its Component’s intended placement in the parent component? I already provide the resolution (width, height) of the Component to my shader program as a uniform, which seems to work great.

Also, I found another forum post which details a bit more about @TonyAtHarrison’s approach of using a single OpenGLContext with multiple OpenGLRenderers which has been helpful to read:

I wish JUCE had this kind of functionality built-in to encourage use of a single OpenGLContext. Using the JUCE OpenGL classes for some time now (3+ years), I have only recently discovered that using multiple OpenGLContexts is bad practice and causes glitches, especially in plugins. If there had been some kind of singleton OpenGLContext manager class similar to @PluginPenguin’s WindowOpenGLContext in JUCE, I might’ve been aware of proper OpenGLContext handling practices much earlier.

First a side note: The WindowGLContext class was actually a pull request to my repo back then by @adamski :slight_smile:

Generally, your shader program should not need to know the Component’s size via uniforms. After calling glViewport, the bounds of the viewport map to a normalised OpenGL coordinate system that ranges from -1 to 1 on the x/y axes, the so-called normalised device coordinates: https://mdn.mozillademos.org/files/11371/clip-space-graph.svg

I assume you’re already aware of it, but just to make it clear: rendering at the bottom left corner of your viewport means rendering at (-1, -1, 0), no matter what size your viewport has. Also keep in mind that, besides the -1 to 1 range, the GL coordinate system differs from the JUCE Component coordinate system in that the y-axis is inverted.

You can still use the vertex shader to transform JUCE-style vertex coordinates into NDC coordinates (which would be a reason to pass the size of your component to it via uniforms); see the sketch below. So what kind of content are you trying to display, 2D or 3D rendering? Often, drawing a super basic triangle with known coordinates helps fix bugs at this stage of OpenGL development.
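Such a transform might look like this (a minimal sketch, embedded JUCE-style as a string literal; pixelPosition and resolution are assumed names, not part of any existing code):

// Minimal sketch: vertex shader mapping component-local pixel coordinates
// (JUCE-style: origin top-left, y grows downwards) to normalised device
// coordinates
static const char* pixelToNDCVertexShader = R"(
    attribute vec2 pixelPosition;   // vertex position in component-local pixels
    uniform vec2 resolution;        // component size in pixels

    void main()
    {
        vec2 ndc = (pixelPosition / resolution) * 2.0 - 1.0;
        ndc.y = -ndc.y;             // flip: JUCE y grows down, GL y grows up
        gl_Position = vec4 (ndc, 0.0, 1.0);
    }
)";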


Thanks for all the help you’ve given!

I think something that is complicating this problem, which I should’ve made clear earlier, is that I am rendering 2D content that uses fragment-shader-based drawing with distance fields. My vertex shader simply renders a quad of four vertices (the ones on the z=0 plane in the coordinate image you provided), while the fragment shader handles the coloring of each pixel on that quad and needs to know the resolution to scale the visual properly. Do you have experience using this kind of rendering in this context?

In the case of distance fields, I believe it is necessary to pass at least the Component resolution as a uniform to the fragment shader, so each pixel can know its properly scaled location. I would bet that, when using a single OpenGLContext with multiple OpenGLRenderers, I probably also need to pass the x and y screen coordinates of the Component location. Would that make sense?

I am betting this distance-field-based rendering is my issue. I am going to see if passing the x and y Component coordinates gets this to work. Good to know I will also need to translate from the JUCE to the GL coordinate system for this.

Here’s an example video of what I am dealing with:


The video shows the plugin window. There are multiple OpenGLRenderers being shown here, but most of them are a solid orange color. The renderer of interest is the one with the black background and a white circle in the middle. As I click on different screen locations, you can see the Component bounds of my renderer move, but the distance field circle rendering does not move with it; it stays locked in the bottom left corner.

Also @adamski thank you for the WindowGLContext class :raised_hands:! That has been extremely helpful in figuring out a single OpenGLContext implementation.


I found that my problem was that I needed to provide x and y coordinates to my fragment shader as uniforms so it could draw with distance fields properly. The coordinates I provided were those of the Component view relative to the parent Component attached to the OpenGLContext, but additionally I had to translate the Y position to refer to the bottom left corner of the Component view instead of the top left. This is because JUCE Components have their origin in the top left of the viewport, while the GL origin is in the bottom left, as mentioned by @PluginPenguin.

If anyone is using a custom OpenGLShaderProgram that draws with distance fields in the fragment shader, while rendering with a single OpenGLContext that also drives multiple other OpenGLRenderers: you must pass uniforms for the resolution (width, height) and origin (x, y) to your fragment shader. The origin should be the bottom left corner of your Component relative to the top-level parent Component to which the OpenGLContext is attached. Lastly, the fragment shader must take the resolution and position into account when doing the distance field drawing. The change to my fragment shader was to recalculate the pixel position used for distance calculations instead of using the raw gl_FragCoord.xy:

uniform vec2 resolution;
uniform vec2 origin;
void main()
{
    vec2 pixelLocal = gl_FragCoord.xy - origin;
    
    // Proceed to do regular distance field calculations using
    // pixelLocal as the pixel position and take into account
    // the resolution if needed for scaling
    // . . .
}
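On the C++ side, setting those uniforms from renderOpenGL() might look like this (a sketch with assumed names, assuming the Component is a direct child of the component the context is attached to and that shader is an already linked OpenGLShaderProgram):

// Sketch: feed "resolution" and "origin" to the fragment shader above,
// converting to physical pixels and flipping y for GL's bottom-left origin
const auto scale = (float) context->getRenderingScale();
const auto area  = getBoundsInParent().toFloat() * scale;   // physical pixels

shader->use();

juce::OpenGLShaderProgram::Uniform resolution (*shader, "resolution");
juce::OpenGLShaderProgram::Uniform origin     (*shader, "origin");

resolution.set (area.getWidth(), area.getHeight());

// Bottom left corner of this Component in gl_FragCoord's coordinate system
const auto parentHeight = (float) getParentHeight() * scale;
origin.set (area.getX(), parentHeight - area.getBottom());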