Introducing Blueprint: Build native JUCE interfaces with React.js

Hey all,

I’m excited to share a project I’ve been working on for a while now!

Blueprint is a JUCE module for building native JUCE user interfaces with React.js. Think React Native, but instead of rendering to native platform components, Blueprint renders to plain old juce::Component instances. That means you can write your plugin or app UI in React.js but ship an interface that’s just JUCE, and still use all the JUCE features you’re used to.
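
To give a quick taste, here’s roughly the shape of a minimal Blueprint component (treat the import path and the exact primitive names as illustrative; the API is still young and may shift):

```js
// Roughly what a minimal Blueprint component looks like. The import
// path and the <View>/<Text> primitive names are illustrative, not a
// settled API. Each element here ends up backed by a real
// juce::Component on the native side.
import React, { Component } from 'react';
import { View, Text } from 'blueprint'; // assumed module name

class HelloJuce extends Component {
  render() {
    return (
      <View {...styles.container}>
        <Text {...styles.label}>Hello, JUCE!</Text>
      </View>
    );
  }
}

const styles = {
  container: { justifyContent: 'center', alignItems: 'center' },
  label: { color: 'ff66aaff', fontSize: 18.0 },
};

export default HelloJuce;
```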

There’s a lot to get into to explain how this works, but let me first share an example that I’m really excited about. My next plugin is now in beta, and its UI is written completely in React, including all the sliders, buttons, the preset browser, etc. Screenshot below.

There’s more to say than I want to fit in this one post, so here are some links:

Lastly, this project is still super young, and there’s a lot to do before it’s really stable or complete. Actual documentation is coming soon, and I’d love to have your help, so if you want to get involved, check out the GitHub issues or feel free to suggest ideas and feedback.

Let me know what you think!

28 Likes

Very cool indeed. I had a similar idea a while back, but never got round to starting it. Looks like you’ve done a very thorough job here. Looking forward to trying it in my next project!

1 Like

Funny, we were just discussing this. So glad you took the initiative to actually build something like this, and you seem motivated to see it through to completion as well. It looks promising, and it seems as if the React Native approach is the way things have been going.

Great Job!

1 Like

Very cool. The UI looks great too. It’s nice to see an implementation of this idea, which has been mentioned a few times; you deserve kudos for that. I’d never heard of that React Reconciler package either.

Any reason for picking that JS engine (Duktape)? It’s small and embeddable, but I’d say it’s pretty slow. I think ChakraCore is nice, fast, and lightweight compared to V8.


Microsoft is doing something kind of similar with its Windows app platform. There was a version written in C#, but it’s being rewritten in C++ now. It will probably be their main cross-platform UI toolkit someday.

2 Likes

I would just like to clarify: This is not React Native, this is React.js on top of JUCE. React Native is React.js on top of native iOS and Android.

(The following rant comes from my experience combining React Native with JUCE.)

It’s also important to keep in mind the caveats of having many layers of JavaScript making native calls and responding to native events. It can make debugging very difficult and can massively increase the complexity of a project. A “vanilla” React Native project (e.g. one using Expo) hides much of this complexity from the user, but it’s still there, and if you want to dive down into the native layers (as is often necessary for audio and custom GUI work), you’ll have to navigate that complexity yourself. This is especially true on Android, where you have to jump from C++ to Java to JavaScript and back…

4 Likes

Thanks guys!

I think there was a question earlier about whether this solves any existing rendering issues with JUCE. The short answer is no: under the hood, this framework uses exactly what JUCE already provides by way of the juce::Component class. And actually I think that’s a huge win for my project. It’s a “standing on the shoulders of giants” situation: JUCE surely has some rendering bugs and performance questions, but what it provides for drawing an interface cross-platform is outstanding, and being able to leverage that means I can write React.js and ship to macOS desktop, Windows desktop, iOS, Android, and everywhere else that JUCE runs. All future improvements to JUCE’s rendering approach are free upgrades for Blueprint. I’m really excited about all of that.

Re: the JS engine choice– my criteria were basically that it should be small, very easy to embed, ES5+ compliant, have a simple JS/C++ interface, and be fast enough. Duktape definitely fit the bill, and I haven’t looked back since integrating it. One of my immediate todos for the project, though, is to refactor the JS/C++ integration to put all the Duktape touchpoints behind a single class interface. With that done, it would be relatively trivial to swap in a new engine and make a comparison. IMO this should all be hidden from the end user anyway, so it’s more an implementation detail.

And to @adamski: yep, thanks for the clarification. I agree that bringing in JS adds complexity, especially in debugging, but I think it’s just a tradeoff to consider. For me, the complexity that React.js eliminates in building a UI is worth the complexity introduced by navigating between JS and C++. It’s also worth saying that React Native feels a little like it’s taking over your project. One of the design decisions I’m quite happy with in Blueprint is that your project is still just JUCE, and you can introduce Blueprint wherever you want (e.g. maybe your whole UI is C++ except for your help screen). So ideally it should feel very non-invasive and let you choose how much debugging complexity you want to take on.

6 Likes

Congrats @ncthom on shipping this. It looks like a really cool piece of work and I look forward to checking it out.

1 Like

I’m trying to understand exactly how this works, so that I can decide whether to invest time into learning React. I don’t know React or JS, but I’ve been thinking about how to diversify my approach to programming interfaces.

If I use Blueprint, would it generate C++ and JUCE code that I could then edit and compile myself? Or would it be scripted inside a JS interpreter, with the JUCE code somehow hidden from view?

Sorry if this question is very basic–my only programming experience is with JUCE and C++, so I don’t yet have any intuitions for web tech platforms.

The JS code is run through an interpreter, which then makes calls to the underlying C++ code.

1 Like

Yep, @adamski nailed it. Think of it as the JavaScript application instructing your JUCE application how to create and assemble a set of juce::Component instances.
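
For anyone curious about the mechanics, here’s the general shape of the react-reconciler pattern underneath (an illustrative sketch, not Blueprint’s actual source; `__BlueprintNative__` stands in for whatever bridge object the embedded JS engine exposes):

```js
// General react-reconciler pattern, for illustration only: you supply
// a "host config" whose callbacks create and mutate host objects. In
// Blueprint's case, those callbacks would forward to the native side,
// which constructs and assembles juce::Component instances.
// __BlueprintNative__ is a hypothetical bridge object.
import Reconciler from 'react-reconciler';

const hostConfig = {
  supportsMutation: true,
  createInstance(type, props) {
    // e.g. ask the C++ side to construct a juce::Component subclass
    return __BlueprintNative__.createViewInstance(type, props);
  },
  appendChild(parent, child) {
    __BlueprintNative__.addChild(parent, child);
  },
  appendChildToContainer(container, child) {
    __BlueprintNative__.addChild(container, child);
  },
  commitUpdate(instance, payload, type, oldProps, newProps) {
    __BlueprintNative__.setViewProperties(instance, newProps);
  },
  // ...plus the remaining host config callbacks the reconciler requires
};

export default Reconciler(hostConfig);
```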

One of my immediate priorities is a very simple GainPlugin example that shows the complete system in a very minimal bit of code. Hopefully that will help explain!

This is a cool project. It’s nice to have a standardised way of laying out Components in JUCE.
Am I correct in thinking that this is purely layout functionality? I.e. do you still have to have concrete JUCE Component classes in your app to actually do any custom drawing?

I guess the React layer also holds your app’s current “visual” state, e.g. which pages are showing, etc.

I’m just wondering how you would go from this to having a UI declared completely in JS, including all the drawing. How do web apps do this sort of thing? Do they use widget toolkits and register callbacks with the JS layer? Or are they all HTML/CSS based, with JS manipulating the DOM and CSS?

2 Likes

Great question. Short answer: no, this can do much more than just layout.

The screenshot in the first post is done completely in React: the reactive slider visuals are drawn in JavaScript, all the mouse event handling logic is in JS, and the app state (i.e., as you mentioned, which view is present) is stored in my React application. The way I did the drawing here is by assembling an SVG in JS and passing it directly through to the ImageView, but I’m not sure yet how scalable that will be, so this part is still open for improvement.
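
As a rough illustration of what “assembling an SVG in JS” means (hand-rolled and in the spirit of the slider visuals, not the actual slider code):

```js
// A rough illustration of assembling an SVG string in JS; the returned
// string is what would get handed to the ImageView. Hand-rolled for
// illustration, not the actual slider implementation.
function drawSliderFill(width, height, value) {
  const v = Math.max(0.0, Math.min(1.0, value));
  const fillHeight = v * height;

  return `
    <svg width="${width}" height="${height}"
         viewBox="0 0 ${width} ${height}"
         xmlns="http://www.w3.org/2000/svg">
      <rect x="0" y="0" width="${width}" height="${height}" fill="#222222" />
      <rect x="0" y="${height - fillHeight}" width="${width}"
            height="${fillHeight}" fill="#66aaff" />
    </svg>
  `;
}
```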

Modern React web apps are largely all HTML/CSS/JS, where React is manipulating the DOM and there’s very little actual HTML. In our case, you could consider the JUCE backend as the “DOM”: it represents the actual tree of rendered components, the actual mouse interaction entry points, etc. Then there are situations in a web app where you want custom draw functions, in which case you’ll usually have React mount a <canvas> element for you and then use the Canvas API for custom “paint routines.” This could be achieved here as well by providing some kind of interface through which the JS engine can marshal calls to JUCE’s Graphics object in the paint callback, or perhaps by pushing OpenGL shader code over to native (as I said… sort of an open question currently 🙂).
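
Purely as a sketch of that first idea, where every name is hypothetical:

```js
// A sketch of the "marshal calls through to juce::Graphics" idea;
// every name here is hypothetical. `g` would be a proxy whose methods
// forward to the native Graphics object during the paint callback.
// The method names mirror juce::Graphics for familiarity.
function paint(g, width, height) {
  g.setColour('ff111111');
  g.fillRect(0, 0, width, height);
  g.setColour('ff66aaff');
  g.drawEllipse(10.0, 10.0, width - 20.0, height - 20.0, 2.0);
}
```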

So custom drawing is workable already, but it will likely see a lot of effort in the short term to find the best approach. In the meantime, you’ll be able to create custom JUCE Components with custom paint() routines just as you normally would, and register them with the React environment so that when you render a <MyCustomView> from React, your component gets created and mounted, and thus your custom paint routine runs. (In progress; will land shortly: https://github.com/nick-thompson/blueprint/issues/8)

This was another design decision made with the intention of enabling a “gradual” adoption strategy, while also giving you the option to do any intensive custom paint routines in JUCE as usual.
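
The JS side of that might end up looking something like this (a sketch only; the names are placeholders until #8 lands):

```js
// Hypothetical sketch of rendering a natively-registered component
// from React once the work in issue #8 lands; the component name is
// illustrative, not a settled API. A string-typed element lets the
// reconciler treat 'MyCustomView' as a host component and resolve it
// to the registered juce::Component factory on the native side.
import React, { Component } from 'react';

const MyCustomView = 'MyCustomView';

class PluginRoot extends Component {
  render() {
    // The native component's paint() routine runs exactly as usual.
    return <MyCustomView cutoff={0.5} />;
  }
}
```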

Let me know if that answers your questions!

2 Likes

Thanks for the detailed answer. That certainly helps explain things and, as I hoped, leaves the system open to a lot of possibilities without tying it to one specific paradigm.

I can see this being a quick way to get SVG graphics into controls. It’s similar to something I’ve been meaning to look into in plain JUCE. I guess I should look at the examples a bit more, but if you don’t mind: how do you modify the way SVGs are drawn depending on parameters?
For example, if you have a rotary slider SVG with a filled track, do you provide these as multiple layers, or dynamically modify the SVG content in the JS to adjust things like the “filled arc” angle? I could see workflows like this being extremely quick and flexible…


As an aside, I’ve thought about creating JS bindings for the various JUCE drawing classes, which would effectively give you a JUCE_LIVE_CONSTANT-style paint() method in JS. But there are so many classes involved that can change API (Graphics, Path, Line, Rectangle, Point, AffineTransform, RectanglePlacement, then all the text classes) that I thought better of it and hoped for a day when reflection in C++ would allow these bindings to be created automatically.


Really looking forward to seeing more examples of this. Great stuff!

Amazing work, ncthom! I can’t wait to try paper.js with Blueprint! Have you tried any drawing libraries yet, such as paper.js or d3.js?

1 Like

@dave96 Great question. I realized shortly after sharing this that the included example was a little more convoluted than it should be, so I just finished putting together a dead-simple gain plugin example, which should hopefully better answer exactly these questions:

The meter implementation there starts on the native side, with a Timer callback on the editor that reads atomics from the processor and dispatches an event with the left/right channel peak values to the React app. The Meter component responds to that dispatch by updating its state, which forces a new render() call, and you’ll see my rudimentary drawing example in its renderVectorGraphics method. (I know this isn’t a great peak meter, but hopefully it communicates the UI framework effectively.)
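
In rough outline, the JS side of that flow looks something like this (the event bridge, import path, and event name here are simplified stand-ins, not the exact example code):

```js
// Rough outline of the Meter component described above. EventBridge,
// the import path, and the 'peakValues' event name are simplified
// stand-ins for the real example. The flow is the point: native
// Timer -> dispatched event -> setState -> render() -> SVG string.
import React, { Component } from 'react';
import { Image, EventBridge } from 'blueprint'; // assumed module name

class Meter extends Component {
  constructor(props) {
    super(props);
    this.state = { left: 0.0, right: 0.0 };
  }

  componentDidMount() {
    // The editor's Timer callback dispatches peak values into the JS
    // environment on a fixed interval.
    EventBridge.addListener('peakValues', (left, right) => {
      this.setState({ left, right });
    });
  }

  renderVectorGraphics(left, right, width, height) {
    // Hand-rolled SVG, in the spirit of the example's method.
    return `
      <svg width="${width}" height="${height}"
           xmlns="http://www.w3.org/2000/svg">
        <rect x="0" y="0" width="${left * width}" height="7" fill="#66aaff" />
        <rect x="0" y="9" width="${right * width}" height="7" fill="#66aaff" />
      </svg>
    `;
  }

  render() {
    const { left, right } = this.state;
    return <Image source={this.renderVectorGraphics(left, right, 120, 16)} />;
  }
}
```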

I’m definitely interested in some kind of bindings for integrating JavaScript at paint time, but I also wonder about the myriad JavaScript WebGL/OpenGL libraries, and whether the better route would be to use one of those existing libraries for generating texture/shader code and sending that over to JUCE for rendering. It’s still unclear which path we’ll take for Blueprint, but I think we’ll have several options.

@alisomay Thanks! I shared your excitement at first and tried many drawing libraries 😄. In my drawing example above, you’ll see I’m writing SVG by hand. I was sure I’d be able to just pull in a JS SVG/drawing library and use a nice API to generate this for me, but I couldn’t find a single implementation that didn’t rely on the DOM underneath, so none would work in our embedded environment. If you know of a library that operates totally in memory and can output an SVG string, then integration should be trivial… and in that case, let me know!

4 Likes

Cool project! Do you have any plans for adding TypeScript definitions?

1 Like

@ncthom For example, in Paper.js I found the getPathData method of the PathItem class, which I guess is the base class for all path classes (http://paperjs.org/reference/pathitem/). This method returns an SVG path string. You probably already know this. Since the library creates objects for shapes and paths and only then draws them to a canvas element, maybe with some rework of the drawing part of the source code we can adapt it to Blueprint. I will dig into this a bit more 🙂
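
For example, something like this already produces path data without ever drawing to a canvas (this snippet assumes paper’s Node build, where paper.setup can take a Size for a headless project; making it fully DOM-free inside Blueprint is exactly the open question):

```js
// Illustrative: PathItem.getPathData() returns an SVG "d" string with
// no canvas drawing involved. Assumes paper's Node build, where
// paper.setup() can take a Size for a headless project.
const paper = require('paper');

paper.setup(new paper.Size(100, 100));

const arc = new paper.Path.Arc({
  from: [10, 90],
  through: [50, 10],
  to: [90, 90],
});

console.log(arc.getPathData()); // an SVG path string, usable in <path d="...">
```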

1 Like

@onqel Thanks! No plans yet, but I would very happily accept that contribution! It might be a tricky time for it, as the API is definitely not stable yet, so there would be a lot of flux and frequent updating of the typedefs, but it’s definitely a feature I’d be happy to have.

@alisomay Hm, yeah, that PathItem might be OK actually. Maybe there’s a particular subset of paper.js we could pull in that never touches the DOM. That would be amazing; I’d love to have that kind of tool available. Please let me know what you find!

2 Likes

Hey people, I stripped the DOM and the Canvas out of Paper.js and can use it as an SVG string generator with most of its functions! Since I butchered the source code a bit, I’m still testing all the features one by one. Sharing soon!

4 Likes

@alisomay Awesome!! Very excited to see that

2 Likes