Using framebuffers as texture inputs for shaders

Hello,

I think I’m fairly close to getting a framebuffer working as an input texture for a shader. I’ve definitely got two shaders working in parallel: as shown in the image, the mouse draws a red circle in one shader, the second shader colours the background green, and the two are composited on top of each other, which is why the circle under the mouse appears yellow.

I’m trying to feed the framebuffer into the shader via

glBindTexture(GL_TEXTURE_2D, openGLFrameBuffer.getFrameBufferID());
GLint id = openGLFrameBuffer.getTextureID();
shaderProgram->setUniform("texture01", id);

But I don’t think I’m doing it right. I’ve made a simple little paint shader here that I’m trying to port for debugging: https://www.shadertoy.com/view/3lfczf
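For reference, my (possibly mistaken) understanding from the OpenGL docs is that a sampler uniform takes the texture unit index rather than the texture ID, so I’d have expected the pattern to be roughly:

glActiveTexture(GL_TEXTURE0); // select texture unit 0 (may need to be openGLContext.extensions.glActiveTexture depending on JUCE version)
glBindTexture(GL_TEXTURE_2D, openGLFrameBuffer.getTextureID()); // the FBO's colour texture, not getFrameBufferID()
shaderProgram->setUniform("texture01", 0); // 0 = the unit index, not the texture ID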

And here’s the full code in the render loop, if anyone can spot what I’m doing wrong:

void OpenGLComponent::renderOpenGL()
{
jassert (OpenGLHelpers::isContextActive());
// Scale viewport
const float renderingScale = (float) openGLContext.getRenderingScale();
glViewport (0, 0, roundToInt (renderingScale * getWidth()), roundToInt (renderingScale * getHeight()));

// Set background color
OpenGLHelpers::clear (getLookAndFeel().findColour (ResizableWindow::backgroundColourId));

////////////////////////////////////////////
// Select shader program
openGLFrameBuffer.makeCurrentRenderingTarget();

shaderProgram->use();

shaderProgram->setUniform("resolution", (renderingScale * getWidth()) ,(renderingScale * getHeight()));

glBindTexture(GL_TEXTURE_2D, openGLFrameBuffer.getTextureID()); // or openGLFrameBuffer.getFrameBufferID()? do they work the same?

GLint idb = openGLFrameBuffer.getTextureID();
shaderProgram->setUniform("texture01", idb); // this is setting fbo as the uniform texture01

openGLFrameBuffer.releaseAsRenderingTarget();

openGLContext.extensions.glBindVertexArray(VAO);
glDrawArrays (GL_TRIANGLES, 0, (int) vertices.size());
openGLContext.extensions.glBindVertexArray (0);

////////////////////////////////////////////
// Do 2nd pass
openGLFrameBuffer.makeCurrentRenderingTarget();

bufferAProgram->use();
glEnable(GL_DEPTH_TEST);

glBindTexture(GL_TEXTURE_2D, openGLFrameBuffer.getTextureID());
bufferAProgram->setUniform("texture01", 0); // What do I need to do to get the texture in here? is it another .bind() ??
// how to get this back into shader A as texture 01, and store pixels, do I need to render to an OpenGL texture?
bufferAProgram->setUniform("resolution", (renderingScale * getWidth()) ,(renderingScale * getHeight()));

//clear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
// Draw Verts for the screen quad, triangle array defined in ShapeVertices.cpp
openGLContext.extensions.glBindVertexArray(VAO);
glDrawArrays (GL_TRIANGLES, 0, (int) vertices.size());
openGLContext.extensions.glBindVertexArray (0);

openGLFrameBuffer.releaseAsRenderingTarget();
glDisable(GL_DEPTH_TEST);
// clear all relevant buffers
//glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
//glClear(GL_COLOR_BUFFER_BIT);
}

Can anyone spot what’s going wrong? I suspect the right texture isn’t being set on the shader program.

If anyone has any thoughts I’d be grateful. @bzhkl, I started a new thread instead of bombarding yours, apologies!


Am I getting any closer by, instead of calling glBindTexture(GL_TEXTURE_2D, …), writing:

OpenGLTexture bufferTexture;

glBindTexture(bufferTexture.getTextureID(), openGLFrameBuffer.getTextureID());
bufferTexture.bind();
bufferAProgram->setUniform("texture01", 0); // Do I need to set this to the address of bufferTex?

I’m not entirely sure about this process. It feels like as soon as you call makeCurrentRenderingTarget() you’d wipe the contents of the buffer before it can be read in through the sampler uniform; that’s my intuition from looking at my code.

I think it might have to be separate read and write operations; I just can’t quite get my head around it.
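If that’s right, I’d guess each frame has to be split into a write pass and a read pass, finishing and releasing the framebuffer before its texture gets bound. An untested sketch of what I mean:

// Write pass: draw into the framebuffer, then release it
openGLFrameBuffer.makeCurrentRenderingTarget();
shaderProgram->use();
// ... draw the quad ...
openGLFrameBuffer.releaseAsRenderingTarget();

// Read pass: only now bind the FBO's colour texture and draw to the screen
bufferAProgram->use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, openGLFrameBuffer.getTextureID());
bufferAProgram->setUniform("texture01", 0); // unit 0
// ... draw the quad ...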


Any update on progress with using an OpenGLFrameBuffer as a uniform texture input to a fragment shader?

Got an OpenGL context going and was able to feed two shaders.

You need to ping-pong between two textures, swap the shaders at the end of the OpenGL loop, and prevent what’s already drawn into the first FBO from being cleared.
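In pseudo-JUCE the core of it looks roughly like this. It’s a from-memory sketch: fboA/fboB are two OpenGLFrameBuffer members and readFBO/writeFBO are persistent pointers to them (hypothetical names, not the exact code in the repo):

// Members: OpenGLFrameBuffer fboA, fboB;
//          OpenGLFrameBuffer* readFBO  = &fboA;
//          OpenGLFrameBuffer* writeFBO = &fboB;

writeFBO->makeCurrentRenderingTarget(); // write into one buffer (never clear the read buffer, or the accumulated drawing is lost)
shaderProgram->use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, readFBO->getTextureID()); // ...while sampling the other
shaderProgram->setUniform("texture01", 0);
// ... draw the full-screen quad ...
writeFBO->releaseAsRenderingTarget();

// The second shader then draws writeFBO's texture to the screen as usual.

std::swap(readFBO, writeFBO); // ping-pong: the roles reverse next frame

Because readFBO and writeFBO persist across frames, each pass reads the pixels that were written the frame before.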

I got 98% of the way there; if it makes your life easier I can upload what I have to GitHub.

Have a poke around in here: GitHub - ArcAudio/ScrewDriver_MadeOfJelly

There’s also a package called OpenGL paint, where I got an OpenGL renderer working outside of JUCE to make sure I had the right OpenGL calls. There’s a wave-equation-solving shader that ping-pongs in there, which may help you understand the calls/workflow.

Honestly, I think I ditched JUCE just because I’d been banging my head against OpenGL for a while and not having much fun.

One shader is fine; with two shaders I still think I’m missing a few neurons from that process, so if you work it out let me know. I can help with the theory, but I found debugging OpenGL inside JUCE, ummm, an interesting experience!

Apologies for posting a few times.

So the way I got there was using these two resources:

GitHub - TimArt/JUCE-OpenGL-Template: Template project for rendering 3D graphics using OpenGL in a JUCE application. Includes useful utility classes in Source/OpenGLUtil (non-plugin OpenGL in JUCE; needs to be ported to a plugin via the next method…)

That should let you re-create what I’ve got above in Jelly Screwdriver, so named because it’s utterly useless without this context.

The package also mixes Faust into the deal, so just ignore the audio stuff.