FR: JUCE Vulkan

22 votes in under a week!
If that doesn’t make the JUCE team take a serious look at this, I don’t know what would :smiley:

We are already working on improving our accelerated graphics rendering.

Vulkan, on its own, is unlikely to be the way to go. It’s a real pain to use on Apple devices, and Windows and Linux driver support is extremely patchy.

Of all the solutions mentioned in this thread, Dawn is the most interesting. Skia also has some merits, but it’s a little more limited - shaders are supported, but you miss out on finer-grained control over exactly what data you’re sending to the GPU.

Compiling Skia or Dawn involves hundreds of source files, Google’s build system, a fairly modern compiler, and a long wait (even on a very powerful machine).

14 Likes

Here is a project where you can play with a coloured triangle using a prebuilt Dawn dylib: https://github.com/cwoffenden/hello-webgpu. WebGPU seems like the way to go, to me.

3 Likes

Sounds promising. So how do you plan to integrate Skia or Dawn if they are this big? Static and/or dynamic libraries, or just strip out all the unnecessary stuff and optimize it as a JUCE module?

If it’s only about replacing the OpenGLGraphicsContext, I would even say it’s an advantage that Skia hides all the details of the GPU and lower-level APIs. It could avoid a lot of the hassle with the Path and Image rasterization problems the current OpenGL implementation faces, since that is most probably already implemented efficiently there.
With Dawn, on the other hand, you still have to worry about implementing the actual LowLevelGraphicsContext.
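
Just to illustrate that point: with Skia the path drawing itself would boil down to something like this (an untested sketch based on the Skia docs; SkCanvas, SkPath and SkPaint are the real classes, and the canvas would come from an SkSurface):

    #include "include/core/SkCanvas.h"
    #include "include/core/SkPaint.h"
    #include "include/core/SkPath.h"

    // Sketch: draw an antialiased path - Skia decides how to rasterize it,
    // no edge tables or texture caching on our side.
    void drawSomePath (SkCanvas* canvas)
    {
        SkPaint paint;
        paint.setAntiAlias (true);
        paint.setColor (SK_ColorRED);

        SkPath path;
        path.moveTo (10.0f, 10.0f);
        path.quadTo (50.0f, 80.0f, 120.0f, 20.0f);
        path.close();

        canvas->drawPath (path, paint);
    }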

Isn’t MoltenVK the iOS version of Vulkan?
(I do hate all these names)

1 Like

Yes. But don’t worry, you get overwhelmed anyway :laughing:

About the C++ API: I almost overlooked it, but definitely don’t miss the samples 01_InitInstance to 15_DrawCube at https://github.com/KhronosGroup/Vulkan-Hpp/tree/master/samples
together with the utils at https://github.com/KhronosGroup/Vulkan-Hpp/blob/master/samples/utils/utils.hpp

They are really helpful for quickly getting used to the Vulkan C++ way of doing things. It’s just so much easier to drop in some util functions to initialize everything instead of getting your head around hundreds of lines of struct initializations.
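
To give an idea of the difference, instance creation with the C++ bindings shrinks to roughly this (untested sketch; these are real vulkan.hpp calls, only the names are mine):

    #include <vulkan/vulkan.hpp>

    // The Unique handles take care of destruction, and the struct
    // constructors replace the usual sType/pNext boilerplate.
    vk::ApplicationInfo appInfo ("MyApp", 1, "MyEngine", 1, VK_API_VERSION_1_1);
    vk::UniqueInstance instance = vk::createInstanceUnique (vk::InstanceCreateInfo ({}, &appInfo));

    // One-liners instead of the usual enumerate-twice C pattern
    vk::PhysicalDevice gpu = instance->enumeratePhysicalDevices().front();
    auto properties = gpu.getProperties();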


Anyway, want to share this nice find.

Apparently Dustin Land from id Software ported the legacy OpenGL renderer of DOOM 3 to Vulkan.
He writes about the process in a small blog series at https://www.fasterthan.life/blog/
There is also a GDC presentation from 2018 that summarizes this process.

Here’s an encouraging quote:

One thing people frequently marvel at is the amount of code Vulkan requires you to write. I often hear (and have said) “It takes 1,000 lines of code just to set things up.” This is absolutely correct. But!!! Don’t dismay. The impression is that, “man, if it’s this difficult to set things up, how difficult is it to do any rendering?” But this isn’t the right impression to have.

That owes to Vulkan’s philosophy in rendering, and that is to eat as much of the cost up front as you can, so nothing is in your way when it comes time to draw. This is perhaps the biggest difference between Vulkan and OpenGL. Vulkan works hard before the party so it can let loose later. OpenGL is running around worried that everyone is having a good time.

So you actually do spend a lot of code on initialization, but this complicated setup results in a main loop that is mostly about submitting draw commands, without worrying too much about pipeline and state. The complex stuff is shifted to the beginning.

All in all it took about 5k lines of Vulkan-specific code to do the port, in about 4 full-time months, without prior knowledge of Vulkan. Windows only. Sounds amazing!

If I had to guess, looking at the juce_opengl module, my estimate would be that it takes about 1 month to create a fully working juce_vulkan module.
Depending on the prior knowledge of graphics and JUCE internals it could be done even quicker.

I think the main uncertainty is how you would actually use Vulkan to efficiently get a juce::Path on the screen. Images are no problem. But paths… not sure. Hm.

4 Likes

I think the main uncertainty is how you would actually use Vulkan to efficiently get a juce::Path on the screen. Images are no problem. But paths… not sure. Hm.

If geometry shaders are supported, then it would be a matter of storing the path in a small texture and then creating the polygon data in the shader from that. It should be almost instant.
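
The CPU side of that could be fairly small. A sketch of the packing step, assuming we flatten the curves first (juce::PathFlatteningIterator is the real JUCE class that subdivides a path into line segments; the actual texture upload is left out):

    // Flatten a juce::Path into line segments and collect the endpoints,
    // ready to be copied into e.g. an RGBA32F texture or storage buffer
    // that the geometry shader reads from.
    juce::Array<float> segments;

    juce::PathFlatteningIterator it (path);

    while (it.next())
    {
        segments.add (it.x1); segments.add (it.y1);
        segments.add (it.x2); segments.add (it.y2);
    }

    // segments.getRawDataPointer() now holds 4 floats per segment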

Right, almost forgot about them. Although I had the impression that geometry shaders were, and still are, somewhat obscure and not widely supported. And where they are, they’re not very performant. Do you know anything about that?

Any alternative ideas on how to implement line drawing? I know you shouldn’t use something like GL_LINE_STRIP, since the anti-aliasing and whatnot is implementation-specific. But there are some solutions, like “Drawing nearly perfect 2D line segments in OpenGL” or “Drawing antialiased lines with OpenGL”.
Is there a reason why this was never considered for JUCE paths? Or is it just too complex for the purpose?


I also looked into the Skia API here: Canvas Creation, and found this sentence a bit disappointing:

[…] all SkSurfaces that will be rendered to using the same OpenGL context or Vulkan device should share a GrContext. Skia does not create a OpenGL context or Vulkan device for you. In OpenGL mode it also assumes that the correct OpenGL context has been made current to the current thread when Skia calls are made.

So how does this huge library help if we still have to create all the low-level stuff ourselves? That essentially leaves us with the same wglMakeCurrent and SwapBuffers problem on OpenGL, and the same setup procedure as using Vulkan directly.
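
For reference, this is roughly the setup it expects from us before any drawing can happen (an untested sketch against the Skia API of this era; width, height and the already-current GL context are assumed to be ours):

    #include "include/core/SkSurface.h"
    #include "include/gpu/GrBackendSurface.h"
    #include "include/gpu/GrContext.h"
    #include "include/gpu/gl/GrGLInterface.h"

    // Skia wraps the GL context we created and made current ourselves
    sk_sp<GrContext> grContext = GrContext::MakeGL (GrGLMakeNativeInterface());

    GrGLFramebufferInfo fbInfo;
    fbInfo.fFBOID  = 0;            // draw into the default framebuffer
    fbInfo.fFormat = GL_RGBA8;     // GL headers assumed to be included

    GrBackendRenderTarget target (width, height, 0 /*samples*/, 8 /*stencil bits*/, fbInfo);

    sk_sp<SkSurface> surface = SkSurface::MakeFromBackendRenderTarget (
        grContext.get(), target, kBottomLeft_GrSurfaceOrigin,
        kRGBA_8888_SkColorType, nullptr, nullptr);

    // surface->getCanvas() is usable now - but wglMakeCurrent/SwapBuffers stay our job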

parawave: The only way to find out is to try it… :grin:
There are a few ways to draw lines; I was just suggesting one. It can also be done entirely in a fragment shader. Here’s an SVG test in a shader: https://www.shadertoy.com/view/llySWc
I don’t think drawing smooth lines on GPU is a problem these days.

What I’m intrigued to know is whether the multi-threaded rendering calls will work more smoothly with the host DAW or not.

Got a working triangle in a JUCE window. Never was so happy to see a triangle!
Enlightenment slowly approaches :upside_down_face:

I can’t say for sure, still getting used to it, but to me it seems like this:
In OpenGL (on Windows) you essentially have to use wglMakeCurrent and SwapBuffers to display your framebuffer on a surface. Especially the “make current” seems to be a very, VERY heavy operation. And since it is bound to a thread, multithreading on one GL context is just impossible.
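
For illustration, the per-frame WGL dance is basically this (hdc and hglrc being the window’s device context and the GL rendering context):

    // Classic WGL frame on Windows. wglMakeCurrent binds the GL context to
    // *this* thread - the expensive part - and only one thread can own it.
    wglMakeCurrent (hdc, hglrc);

    renderFrame();                       // all gl* calls must happen while current

    SwapBuffers (hdc);                   // present the back buffer
    wglMakeCurrent (nullptr, nullptr);   // release, so another thread could take over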

In Vulkan you realize this by explicitly initializing a swap chain with the VK_KHR_swapchain device extension. This gives you much more control over the state, and over how and when you submit your commands into the pipeline. Access to Vulkan is not limited to a “current” context or thread. That fact alone seems to be a big improvement in a DAW environment. On top of that, you can essentially “record” and cache your commands and submit them even from multiple threads. Not that that’s really necessary for simple 2D graphics.
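
Roughly, the explicit per-frame flow looks like this (a sketch using the real entry points; the device, swap chain, queues, command buffers, semaphores and fence are all assumed to be created once during setup):

    // Acquire an image, submit pre-recorded commands, present.
    // Nothing here is bound to a "current" thread.
    uint32_t imageIndex = 0;
    vkAcquireNextImageKHR (device, swapchain, UINT64_MAX,
                           imageAvailableSemaphore, VK_NULL_HANDLE, &imageIndex);

    VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;

    VkSubmitInfo submitInfo { VK_STRUCTURE_TYPE_SUBMIT_INFO };
    submitInfo.waitSemaphoreCount   = 1;
    submitInfo.pWaitSemaphores      = &imageAvailableSemaphore;
    submitInfo.pWaitDstStageMask    = &waitStage;
    submitInfo.commandBufferCount   = 1;
    submitInfo.pCommandBuffers      = &commandBuffers[imageIndex];  // recorded earlier
    submitInfo.signalSemaphoreCount = 1;
    submitInfo.pSignalSemaphores    = &renderFinishedSemaphore;
    vkQueueSubmit (graphicsQueue, 1, &submitInfo, inFlightFence);

    VkPresentInfoKHR presentInfo { VK_STRUCTURE_TYPE_PRESENT_INFO_KHR };
    presentInfo.waitSemaphoreCount = 1;
    presentInfo.pWaitSemaphores    = &renderFinishedSemaphore;
    presentInfo.swapchainCount     = 1;
    presentInfo.pSwapchains        = &swapchain;
    presentInfo.pImageIndices      = &imageIndex;
    vkQueuePresentKHR (presentQueue, &presentInfo);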

Still not sure what the best way to implement it in JUCE is. I will probably try the obvious: record commands in the juce paint functions and just submit them. For now I don’t see a reason to introduce another thread, which would just complicate things due to the necessary MessageManager locking.

I will try to get the triangle example working in multiple plugin windows and see how it performs with multiple instances in the same process. If they don’t interfere with each other, one could essentially strip out everything wgl-related and just directly port the OpenGLGraphicsContext using the message thread.

2 Likes

Wow man, nice work! If you can render one tri then you can render a million!

I remember that moment with OpenGL, it is surprisingly exciting. :grinning:

What I meant before was that I’m mainly interested in how it integrates with other host render calls, and that all seems promising so far.

This is using straight Vulkan?
Does it render before the JUCE paint commands, like the OpenGL functionality, and did you use the same route? It would make the transition easier if you did.
Over all, this is great news, and I look forward to further developments.

Will you share your code at some point? If not, it’s just good to know things are looking up for plug-in rendering again!

Qt 6 got released with a shiny new RHI. Would love to see this in JUCE too, but not in 2-3 years…

2 Likes

Hey parawave, did you get any further with this? I’m intrigued to know how it went.
Thanks,
Dave H.

Me too. Indeed I’d be happy to make myself useful if there’s anything within my competence. Which does not include writing Vulkan code.

Yes, actually I’m still working on it. The first triangle back then was embedded in the vertex shader. Everything was spaghetti copy/paste tutorial code. Commands were pre-recorded, and a window resize or minimize froze the process.

Now there’s still just a triangle, but the most obvious problems are gone, mostly things related to windowing and swap chain recreation. The bulk of the initialization code got wrapped in additional classes with more error checking and asserts. Initially it wasn’t obvious how to structure things, but the relations are clearer now. Instead of hundreds of structs it’s just something like this:

    nativeContext.reset (new VulkanNativeContext (*this, component));
    physicalDevice.reset (new VulkanPhysicalDevice());
    surface.reset (new VulkanSurface (*nativeContext));
    device.reset (new VulkanDevice (*physicalDevice, *surface));

    device->setShaderModule ("tutorial2.vert", new VulkanShaderModule (*device, sharedShaders->vertSPV));
    device->setShaderModule ("tutorial2.frag", new VulkanShaderModule (*device, sharedShaders->fragSPV));

    swapChain.reset (new VulkanSwapChain (*device));
    renderer.reset (new VulkanRenderer (*swapChain));

Initially I tried to replicate everything in juce_opengl\native\juce_OpenGL_win32.h and OpenGLContext. A lot of it was unnecessary, and mostly there to overcome flaws in the OpenGL architecture. To make things simpler I ripped out the background thread stuff, and now all calls are made on the message thread via a normal juce::Timer. Works fine. A single frame render now looks like this:

    void drawFrame()
    {
        const auto& swapChain = owner.getSwapChain();

        auto& frame = *frames[static_cast<int> (currentFrameIndex)];

        frame.wait();     // wait on the fence of the in-flight frame
        frame.acquire();  // acquire the next swap chain image

        // Record and submit
        {
            const auto swapChainImageIndex = frame.getSwapChainImageIndex();
            auto& cmds = *renderCommands[swapChainImageIndex];

            const auto& frameBuffer = renderBuffers->getFrameBuffer (swapChainImageIndex);

            cmds.reset();
            cmds.begin();

            cmds.resetViewport();
            cmds.resetScissor();

            cmds.beginRenderPass (*renderPass, frameBuffer);

            cmds.bindPipeline (*pipeline);
            cmds.bindVertexBuffer (vertexBuffer.getBuffer());

            cmds.record();

            cmds.endRenderPass();
            cmds.end();

            // Submission waits on the acquire semaphore, then signals the
            // render semaphore and the frame's fence
            cmds.submit (frame.getAcquiredSemaphore(), frame.getRenderedSemaphore(), frame.getRenderFence());
        }

        frame.present();
    }

At this point it renders directly to the acquired frame. The goal is to replicate the things that happen in OpenGLGraphicsContext and pass a LowLevelGraphicsContext to the regular paint method.
This needs a few more wrapper classes and additional state management. Stuff like Graphics::beginTransparencyLayer() renders into additional frame buffers (in OpenGL). Have to figure that out. Additionally, you can’t just change shaders like one did with glUseProgram. It’s actually necessary to create and bind a Pipeline for each different shader. A bit complicated, but doable (see the sketch after the snippet below). In the end it will work just like the OpenGLContext:

    VulkanContext vkContext;
    vkContext.attachTo(component);
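
For the pipeline-per-shader issue mentioned above, the plan is basically a small cache, something like this (a hypothetical sketch; PipelineKey and createPipeline() are made-up names, only vkCreateGraphicsPipelines behind them is real Vulkan):

    #include <map>
    #include <tuple>
    #include <vulkan/vulkan.h>

    // One pipeline per shader/state combination, created once and reused.
    struct PipelineKey
    {
        VkShaderModule vertexShader;
        VkShaderModule fragmentShader;
        VkRenderPass   renderPass;

        bool operator< (const PipelineKey& other) const noexcept
        {
            return std::tie (vertexShader, fragmentShader, renderPass)
                 < std::tie (other.vertexShader, other.fragmentShader, other.renderPass);
        }
    };

    VkPipeline createPipeline (const PipelineKey&);  // wraps vkCreateGraphicsPipelines

    std::map<PipelineKey, VkPipeline> pipelineCache;

    VkPipeline getOrCreatePipeline (const PipelineKey& key)
    {
        auto found = pipelineCache.find (key);

        if (found != pipelineCache.end())
            return found->second;

        return pipelineCache[key] = createPipeline (key);  // expensive, done once
    }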

I also created a test plugin, and multiple open instances don’t interfere with each other the way the wgl SwapBuffers() does. I guess that’s because they all create their own VulkanDevice and use their own command buffers.

It’s very cool that you can just record various commands and decide at what point to submit them. This makes the state management much easier and less error-prone than in OpenGL, and will possibly allow more complex drawing methods. You know, things like drawing an image with a blur shader, or post-processing.


The current state? I just added a VertexBuffer helper class. Next up are IndexBuffer, shader uniforms, texture creation and the FrameBuffer stuff. Then it’s just a matter of puzzling things together to get a VulkanGraphicsContext. The goal is a LowLevelGraphicsContext that gives the same results as the OpenGL one. The efficient path rendering can probably be optimized at a later point. Adding compute shaders to the pipeline could also be an interesting addition.

6 Likes

Excellent work man!! :sunglasses: I just wanted to know if you had it working at all, but you’ve done a load of wrapping as well!
It would be great if we could share the task, as a community of developers, but it’s good enough for me to know it’s even working, TBH. I don’t know if the JUCE team are interested at all.

I suspect the JUCE team are very interested but exceedingly resource-limited!
This sounds really impressive: if/when you feel the time is right please do let us know what we might be able to do to help.

I think so too, and I fully understand why they don’t supply more information about their process and roadmap in that regard. Seeing all the Vulkan concepts involved, it would take a full-time developer purely focused on it to deliver the usual JUCE quality, ease of use and coding standards with full documentation. And even if it’s done right, the problem on Apple platforms remains. It’s somewhat possible with MoltenVK, but who knows what Apple will restrict in the future to favor their own API. It would be a typical move. But I think even if they restrict it, offering a pure Vulkan module is still worthwhile, simply because on Windows, Linux and especially Android, Vulkan is the way to go.

Anyway, since Jules is purely focused on SOUL, judging by the commits, they have enough to do with the usual bookkeeping work just to keep JUCE alive. A pity, but understandable.

I’m considering sharing the result as soon as it’s in a noteworthy state. Currently I’m focusing purely on Windows, and could use some help with the surface implementation and testing on other platforms (at a later point). Since it’s in the prototype phase the software architecture is highly debatable, so I’d rather avoid releasing a construction site (that will definitely change quite a bit) : )

Edit:
Seeing a quad now. Oh god, DescriptorSets and the process for shader uniforms are kind of annoying :grimacing:
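
For anyone curious, the uniform setup boils down to roughly this (a sketch with the real Vulkan calls; the device and a descriptorPool are assumed to exist, and the final vkUpdateDescriptorSets pointing the set at a buffer is left out):

    // Describe what binding 0 is, build a layout, then allocate a set from
    // the pool - all before a single uniform value is written.
    VkDescriptorSetLayoutBinding binding {};
    binding.binding         = 0;
    binding.descriptorType  = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    binding.descriptorCount = 1;
    binding.stageFlags      = VK_SHADER_STAGE_VERTEX_BIT;

    VkDescriptorSetLayoutCreateInfo layoutInfo { VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO };
    layoutInfo.bindingCount = 1;
    layoutInfo.pBindings    = &binding;

    VkDescriptorSetLayout layout;
    vkCreateDescriptorSetLayout (device, &layoutInfo, nullptr, &layout);

    VkDescriptorSetAllocateInfo allocInfo { VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO };
    allocInfo.descriptorPool     = descriptorPool;
    allocInfo.descriptorSetCount = 1;
    allocInfo.pSetLayouts        = &layout;

    VkDescriptorSet descriptorSet;
    vkAllocateDescriptorSets (device, &allocInfo, &descriptorSet);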

Noteworthy state? You mean we can’t see it until you’re happy with it? An interesting view of sharing development, but I completely understand.

1 Like

If you open up a PR you could get a lot of help from people. I could, for example, look into macOS support, as I’ve already worked with MoltenVK in the past year.

1 Like