Introducing Blueprint: Build native JUCE interfaces with React.js

The JS code is run through an interpreter, which then makes calls to the underlying C++ code.

1 Like

Yep, @adamski nailed it. Think of it as the JavaScript application instructing your JUCE application how to create and assemble a set of juce::Component instances.

One of my immediate priorities is a very simple GainPlugin example that shows the complete system in a very minimal bit of code. Hopefully that will help explain!

This is a cool project. It’s nice to have a standardised way of laying out Components in JUCE.
Am I correct in thinking that this is purely layout functionality? I.e. do you still have to have concrete JUCE Component classes in your app to actually do any custom drawing?

I guess the React layer also contains your current app’s “visual” state e.g. whether pages are showing etc.

I’m just wondering how you would go from this to having a UI completely declared in JS, including all the drawing. How do web-apps do this sort of thing? Do they use widget toolkits and register callbacks with the JS layer? Or are they all HTML/CSS based with JS manipulating the DOM and CSS?

2 Likes

Great question. Short answer: no, this can do much more than just layout.

The screenshot I have in the first post here is completely done in React: that means that the reactive slider visuals are drawn in JavaScript, all the mouse event handling logic is in js, and the app state is stored in my react application (i.e. as you mentioned, which view is present). The way I did the drawing here is by assembling an SVG in js and then passing that through directly to the ImageView, but I’m not sure yet how scalable that will be, so this is sort of open for improvement still.
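To make that concrete, here is a hypothetical sketch of assembling an SVG string in js, in the spirit of the approach described above (the function name, colours, and dimensions are all made up for illustration, not taken from Blueprint):

```javascript
// Illustrative sketch: build an SVG string in plain js, as one might pass
// through to an image-type view. A value in [0, 1] fills a bar from the bottom.
function sliderFillSvg(width, height, value) {
  const fillHeight = Math.round(height * value);
  return [
    `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">`,
    `<rect x="0" y="0" width="${width}" height="${height}" fill="#222"/>`,
    `<rect x="0" y="${height - fillHeight}" width="${width}" height="${fillHeight}" fill="#6ab"/>`,
    `</svg>`,
  ].join('');
}
```

On every render you regenerate the string from the current state and hand it to the image view, so the visuals stay a pure function of app state.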

Modern React web apps are largely all HTML/CSS/JS where React is manipulating the DOM and there’s very little actual HTML. In our case here, you could consider the JUCE Backend as the “DOM” which represents the actual tree of components rendered and the actual mouse interaction entrypoints, etc. Then you have situations in a web app where you want custom draw functions, in which case you’ll usually get React to mount a <canvas> element for you and then use the Canvas API for custom “paint routines.” This could be achieved here as well by providing some kind of interface through which the js engine can marshal calls through to juce’s Graphics object in the paint callback, or perhaps by pushing opengl scripts over to native (as I said… sort of an open question currently :slight_smile: )

So custom drawing is workable already but will likely see a lot of effort in the short term to find the best approach. In the meantime, you will be able to create custom JUCE Components with custom paint() routines just as you would normally, and register them with the React environment so that when you render a <MyCustomView> from React, your component will get created and mounted, and thus your custom paint routine will run. (In progress, will land shortly: https://github.com/nick-thompson/blueprint/issues/8)

This was another design decision made with the intention of enabling a sort of “gradual” adoption strategy, but at the same time gives you the option to do any intensive custom paint routines in JUCE as usual.

Let me know if that answers your questions!

2 Likes

Thanks for the detailed answer. That certainly helps explain things and as I hoped leaves the system open for a lot of possibilities without tying it to one specific paradigm.

I can see this as being a quick way to get SVG graphics into controls. It’s similar to something I’ve been meaning to look into in plain JUCE. I guess I should look at the examples a bit more, but if you don’t mind, how do you modify the way SVGs are drawn depending on parameters?
For example, if you have a rotary slider SVG with a filled track, do you provide these as multiple layers or dynamically modify the SVG content in the JS to adjust things like the “filled arc” angle? I could see workflows like this being extremely quick and flexible…
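For concreteness, the kind of dynamic modification being asked about might look like this: recompute the "filled arc" of a rotary slider as an SVG path string on each render. This is an illustrative sketch, not Blueprint code; all names are made up.

```javascript
// Convert a polar coordinate (angle in radians, clockwise from 3 o'clock)
// to a cartesian point around a centre.
function polarToCartesian(cx, cy, r, angle) {
  return { x: cx + r * Math.cos(angle), y: cy + r * Math.sin(angle) };
}

// Build an SVG "A" (arc) path command from startAngle to endAngle.
function arcPath(cx, cy, r, startAngle, endAngle) {
  const start = polarToCartesian(cx, cy, r, startAngle);
  const end = polarToCartesian(cx, cy, r, endAngle);
  const largeArc = endAngle - startAngle > Math.PI ? 1 : 0;
  return `M ${start.x} ${start.y} A ${r} ${r} 0 ${largeArc} 1 ${end.x} ${end.y}`;
}

// For a slider value in [0, 1], interpolate the end angle each render.
function filledArcFor(value, startAngle, endAngle) {
  return arcPath(50, 50, 40, startAngle, startAngle + (endAngle - startAngle) * value);
}
```

Because the path string is regenerated per render, no layered assets are needed; the fill angle simply tracks the parameter.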


As an aside, I have thought about creating JS bindings for the various juce drawing classes, which would give you the ability to effectively do a JUCE_LIVE_CONSTANT paint() method in JS. But there are so many classes involved whose APIs can change (Graphics, Path, Line, Rectangle, Point, AffineTransform, RectanglePlacement, then all the text classes) that I thought better of it and hoped for a day when reflection in C++ would allow these bindings to be created automatically.


Really looking forward to seeing more examples of this. Great stuff!

Amazing work ncthom! I can’t wait to try paper.js with blueprint! Have you tried any drawing libraries yet, such as paper.js, d3.js, etc.?

1 Like

@dave96 Great question. I realized shortly after sharing this that the included example was a little more convoluted than it should be, so I just finished putting together a dead simple gain plugin example which should hopefully better answer exactly these questions:

The meter implementation there starts on the native side with a Timer callback running on the editor that reads atomics from the processor and dispatches an event with the left/right channel peak values to the React app. This Meter component responds to the dispatch by updating its state, which forces a new render() call, and you’ll see my rudimentary drawing example in the renderVectorGraphics method therein. (I know this isn’t a great peak meter, but it hopefully communicates the UI framework effectively)
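The Timer and atomics live on the native side; the js side of that flow can be modelled framework-free like this (a sketch with illustrative names only, not the actual Blueprint API or the GainPlugin source): an event listener updates state, and a render function derives the meter visuals from that state.

```javascript
// Sketch of the dispatch -> state -> render cycle described above.
// In Blueprint, onPeakValues would be wired to the event dispatched from the
// native Timer callback, and render() would emit the vector graphics.
function createMeter(height) {
  let state = { leftPeak: 0, rightPeak: 0 };

  return {
    // Called when native code dispatches new left/right channel peaks
    onPeakValues(left, right) {
      state = { leftPeak: left, rightPeak: right };
      return this.render();
    },
    // Analogous to a React render(): a pure function of current state
    render() {
      return {
        leftBarHeight: Math.round(height * state.leftPeak),
        rightBarHeight: Math.round(height * state.rightPeak),
      };
    },
  };
}
```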

I’m definitely interested in some kind of bindings to integrate javascript at paint time, but I also wonder whether the better route would be to use one of the myriad javascript webgl/opengl libraries for generating texture/shader code and sending that over to juce for rendering. It’s still unclear which path we’ll take for Blueprint, but I think we’ll have several options.

@alisomay Thanks! I shared your excitement at first, and tried many drawing libraries :smiley: In my drawing example above you’ll see I’m writing SVG by hand. I was sure I would be able to just pull in a js SVG/Drawing library and use a nice API for generating this for me, but I couldn’t find a single implementation that didn’t rely on the DOM underneath, so none would work in our embedded environment. If you know of a library that operates totally in memory and can output an SVG string, then integration should be trivial… and in that case let me know!

5 Likes

Cool project! Do you have any plans for adding TypeScript definitions?

1 Like

@ncthom For example, in Paper.js I found the “getPathData” method of the PathItem class, which I guess is the base class for all path classes (http://paperjs.org/reference/pathitem/). This method returns a path string. You probably already know this. Since it looks like the library creates objects for shapes and paths and then draws them to a canvas element, maybe with a rework of the drawing part of the source code we can adapt it to blueprint. I will check this subject a bit more :slight_smile:

1 Like

@onqel thanks! No plans yet, but I would very happily accept that contribution! It might be a bit of a tricky time as the API is definitely not stable yet, so there will be a lot of flux and frequent updating of the typedefs, but this is definitely a feature I’d be happy to have.

@alisomay Hm yea that PathItem might be ok actually. Maybe there’s a particular subset of paperjs that we could pull that never touches the DOM. That would be amazing, I’d love to have that kind of tool available. Please let me know what you find!

2 Likes

Hey people, I scraped out the DOM and the Canvas from Paper.js and can use it as an SVG string generator with most of its functions! Since I butchered the source code a bit I am still testing all the features one by one. Sharing soon!

4 Likes

@alisomay Awesome!! Very excited to see that

2 Likes


Check it out! Don’t forget this was done quick and dirty with excitement, and is only a proof of concept :slight_smile:
There is also a version of the simple gain plugin there, in which I used naked-paper.js for a couple of things as a proof of concept.
When I have more time I will try to understand the source code of paper.js more deeply, so maybe I can adapt it much better.

6 Likes

Diving down this exact same rabbit hole at the moment myself. I have tried two.js to see if it works without the DOM, but no dice.

Perhaps it’s time to investigate your naked-paper.js approach @alisomay! I’m looking for a js library I can incorporate to draw simple shapes and then convert said shapes to an SVG path string, which could be passed either to Blueprint’s existing “border-path” view property or to a new component type which uses Drawable::parseSVGPath and the like in its paint routine to draw the collection of Path objects it’s passed.

I’m wondering if this will be cheaper and more flexible than the current ImageView approach, which parses a full svg doc and tears down the associated Drawable each time.

@ncthom has there been any more investigation around this area recently?

I did find https://github.com/andreaferretti/paths-js which I was able to pull in but the drawing support is pretty rudimentary.

I haven’t done much more on this myself, but it is definitely something I’m still interested in. Similarly, I never found a suitably powerful drawing library that operated without the DOM, which is a bummer.

It’s definitely a feasible proposal: a drawing library that represents “draw commands” in its own internal format and then offers multiple export formats, such as an SVG string. That same library could take a Canvas 2D Context-like object and invoke its draw commands on it, which would provide rendering to a web canvas; we could likewise imagine writing a canvas-like object that just calls through to juce’s graphics routines and passing that object into the same API. That way you would skip the steps of encoding to and then decoding from an SVG string.
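A minimal sketch of that idea (purely illustrative; none of these names exist in Blueprint): record draw commands in an internal list, then replay them against any "context-like" backend, or export them as an SVG path string.

```javascript
// Record draw commands once; replay them against any backend that exposes
// the same method names (a real Canvas 2D context, a juce::Graphics shim, ...),
// or export them to another format entirely.
function createRecorder() {
  const commands = [];
  return {
    moveTo: (x, y) => commands.push(['moveTo', x, y]),
    lineTo: (x, y) => commands.push(['lineTo', x, y]),
    stroke: () => commands.push(['stroke']),
    // Replay against any object exposing the same method names
    replay(ctx) {
      for (const [name, ...args] of commands) ctx[name](...args);
    },
    // One possible export format: an SVG path string
    toSvgPath() {
      return commands
        .filter(([name]) => name !== 'stroke')
        .map(([name, x, y]) => `${name === 'moveTo' ? 'M' : 'L'} ${x} ${y}`)
        .join(' ');
    },
  };
}
```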

Anyway, I think that Canvas-like object with calls through to graphics routines is a good first step, because then you could just draw using that either in web or in blueprint. For example:

class MyCoolKnob extends React.Component {
  draw(ctx) {
    // The DOM Canvas 2D Context API
    ctx.moveTo(50, 50);
    ctx.lineTo(100, 100);
    ctx.stroke();
  }

  render() {
    return (
      <Canvas onDraw={this.draw} />
    );
  }
}

You could imagine writing such a <Canvas> object for the browser that internally renders a <canvas>, captures its drawing context, and calls this onDraw method in every requestAnimationFrame callback, passing the drawing context along. Then imagine writing a custom Blueprint Canvas component that, in every paint(Graphics& g), creates a native object in the js engine with a bunch of native methods that just call back to that g, using an API that mimics the Context 2D API. Portable rendering :slight_smile:

I think this is probably the more straightforward path to introducing a really valuable draw hook into blueprint.

1 Like

Thanks for the awesome advice @ncthom !

Yeah this certainly sounds like the way to go.

So I can wrap my head around how this might work with ReactApplicationRoot::registerNativeMethod, i.e. I could stash the reference to the Graphics object passed to paint() and bind equivalents for the various CanvasRenderingContext2D API calls.

i.e. ctx.lineTo maps to juce::Path::lineTo which is rendered by the Graphics object held in the stash.

To trigger the Canvas component’s onDraw callback, I guess we would essentially call EcmascriptEngine::invoke("drawCanvas") from paint() after registering the native object/method bindings for the graphics object, i.e. to actually “call back” to the Graphics instance and render.
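Sketched from the js side, that scheme might look like the following. Both `GraphicsContext` and `drawCanvas` are names taken from this discussion, not real Blueprint API, and `GraphicsContext` is stubbed here (recording its calls) so the sketch is self-contained; in the real thing its methods would be native bindings onto the stashed juce::Graphics.

```javascript
// Stub of the native object paint() would register; in Blueprint each method
// would call through to the corresponding juce::Graphics routine.
const GraphicsContext = {
  calls: [],
  moveTo(x, y) { this.calls.push(`moveTo(${x},${y})`); },
  lineTo(x, y) { this.calls.push(`lineTo(${x},${y})`); },
  stroke()     { this.calls.push('stroke()'); },
};

// What engine.invoke("drawCanvas") would land on in js
function drawCanvas() {
  GraphicsContext.moveTo(50, 50);
  GraphicsContext.lineTo(100, 100);
  GraphicsContext.stroke();
}
```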

I’m not even close to fluent with duktape yet. I’m assuming I can register an actual object with methods rather than calling registerNativeMethod for every Graphics call. I think I can head down the right track with EcmascriptEngine::registerNativeProperty/duk_push_object and work out how to create an object with the necessary methods via juce::var and co.

Failing that I think I can see how calling on a “Graphics” target would work based on looking over the approach used for the BlueprintNative calls.

@ncthom if there’s any chance you could give a quick example of registering an object with methods via the EcmascriptEngine and calling said object in js, that would be ruddy marvellous. I can sort of see how some of this works based on the pushVarToDukStack helper in EcmascriptEngine.cpp.

(I realise I’ve promised various pull requests to land on github soon (they are coming!) but I’d be happy to have a crack at a basic implementation of this; I’m not totally sure what the performance implications of registering native methods/objects inside every call to paint() will be like…)

Yup, you’re totally on point, and the callback bit is a little rough around the edges right now: https://github.com/nick-thompson/blueprint/issues/32 :smiley:

I’ll carve out some time this week to take a quick stab at that issue and follow it up with a little Canvas example! I’ll try to remember to write back here when I do

2 Likes

Awesome sounds great Nick. Let me know if I can assist with anything.

Cheers

@ncthom This gives me an idea.

I’m currently writing a React based slider. It’s a decent amount of work for me to write logic to handle gestures and convert said gestures to values for various slider types (rotary, two-value, linear-vertical, linear-horizontal etc.). Not to mention skew logic, ranges etc.

I’m wondering if another paradigm could emerge where you override LookAndFeel methods in a class derived from a stock juce component. Users could then have the js context do the drawing whilst keeping the tried and tested juce components in charge of functionality.

i.e. for a slider I might have something like the following:

// A Look and feel class which dispatches look and feel overrides to js
class ReactBasedSliderLookAndFeel : public juce::LookAndFeel_V4
{
public:
    //TODO: Obviously we wouldn't want to provide the full EcmascriptEngine
    //      ref here. We want some intermediary class to delegate registering native functions
    ReactBasedSliderLookAndFeel(blueprint::EcmascriptEngine &engine)
        : engine(engine)
    {
        
    }
    
    void drawRotarySlider (  juce::Graphics &g
                           , int x
                           , int y
                           , int width
                           , int height
                           , float sliderPosProportional
                           , float rotaryStartAngle
                           , float rotaryEndAngle
                           , juce::Slider &slider) override
    {
        // NOTE: this is still a sketch. We would need a way via the EcmascriptEngine
        // class to register a DynamicObject so that functions can be called upon it
        // from js, and the captured Graphics reference is only valid for the
        // duration of this call.
        juce::DynamicObject::Ptr jsGraphicsContext = new juce::DynamicObject();

        jsGraphicsContext->setMethod ("drawRect",
            [&g] (const juce::var::NativeFunctionArgs& args) -> juce::var
            {
                if (args.numArguments == 4)
                    g.drawRect ((int) args.arguments[0], (int) args.arguments[1],
                                (int) args.arguments[2], (int) args.arguments[3]);

                return juce::var();
            });

        engine.registerNativeProperty ("GraphicsContext", juce::var (jsGraphicsContext.get()));

        // Somehow invoke drawRotarySlider in js, which will use the previously
        // registered GraphicsContext object. (The Slider itself can't be marshalled
        // to js directly, so only plain values are passed.)
        engine.invoke ("drawRotarySlider",
                       x, y, width, height,
                       sliderPosProportional,
                       rotaryStartAngle,
                       rotaryEndAngle);

        //TODO: Deregister the GraphicsContext property here?
private:
    blueprint::EcmascriptEngine &engine;
};

// Where CONTROLLER here is a SliderListener (probably the Component class owning
// the ReactApplicationRoot instance) which uses the ID of the slider by doing
// something akin to the following in its listener callback:
//
//   if (auto* reactSlider = dynamic_cast<ReactBasedSlider<Controller>*> (slider->getParentComponent()))
//       if (reactSlider->id == "GainSlider")
//           ; // Do gain stuff
template<typename CONTROLLER>
class ReactBasedSlider : public blueprint::View
{
public:
    ReactBasedSlider()
            : blueprint::View()
    {
        addAndMakeVisible(slider);
    }

    virtual ~ReactBasedSlider() = default;

    void parentHierarchyChanged() override
    {
        if (blueprint::ReactApplicationRoot *root = findParentComponentOfClass<blueprint::ReactApplicationRoot>())
        {
            lookAndFeel = std::make_unique<ReactBasedSliderLookAndFeel>(root->engine);
            slider.setLookAndFeel(lookAndFeel.get());
        }

        if (CONTROLLER *rootController = findParentComponentOfClass<CONTROLLER>())
        {
            slider.addListener(rootController);
        }
    }

    void paint (juce::Graphics& g) override
    {
        blueprint::View::paint(g);
    }

    void resized() override
    {
        blueprint::View::resized();
        slider.setBounds(getLocalBounds());
    }

private:
    juce::Slider slider;

    //TODO: Better way to do this than having separate instance for each slider?
    std::unique_ptr<ReactBasedSliderLookAndFeel> lookAndFeel;
};

This way I don’t need to rewrite slider logic that already exists in the juce codebase but I still get the benefits of hot reloads, react based drawing/styling and flexbox layout. This seems like it could be a powerful way to add a set of reusable components to blueprint for people to get going with quickly.

Things like juce::Slider’s range and skew factor should be settable via props from js. blueprint::View::setProperty is virtual, so it looks like we could override this in ReactBasedSlider (or BlueprintSlider if the name suits better :-)) and dispatch property changes to the relevant juce::Slider methods, e.g. a ‘skew’ prop change triggers juce::Slider::setSkewFactor().
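That prop-dispatch idea would live in the C++ setProperty override, but the shape of it can be sketched in plain js with a stub slider object (the prop names here are illustrative; setSkewFactor, setRange, and setValue are real juce::Slider methods):

```javascript
// Forward recognised props to the matching slider setters, ignoring the rest.
// In Blueprint this logic would sit in a setProperty override on the View.
function applySliderProps(slider, props) {
  const dispatch = {
    skew:  (v) => slider.setSkewFactor(v),
    range: ([min, max]) => slider.setRange(min, max),
    value: (v) => slider.setValue(v),
  };

  for (const [name, value] of Object.entries(props)) {
    if (name in dispatch)
      dispatch[name](value);
  }
}
```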

This does open up interesting discussions around how you would then bind callbacks to a ReactBasedSlider. i.e. add a SliderListener in some way either using a template or constructor arg in the ViewFactory. This listener could use a dynamic cast to ReactBasedSlider and read the slider’s ID which has been set via a js prop.

So I could register my ReactBasedSlider component using your excellent ShadowView magic:

appRoot.registerViewType("ReactBasedSlider", [] () {
   // Slider with listener typed on the owning class of appRoot
   using ReactBasedSliderType = ReactBasedSlider<OwningControllerClass>;

   auto v  = std::make_unique<ReactBasedSliderType>();
   auto sv = std::make_unique<blueprint::ShadowView>(v.get());

   return ViewPair(std::move(v), std::move(sv));
});

Then users simply have some js function for overriding drawRotarySlider and friends, pulled in at the AppRoot level, which uses the Graphics-like object registered by the C++ code to do its magic.
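A sketch of what that user-side js override might look like. The `GraphicsContext` name mirrors the hypothetical binding in the C++ sketch above and is stubbed here (recording calls) so the example is self-contained; drawEllipse and drawLine are modelled loosely on the juce::Graphics routines of the same names.

```javascript
// Stub of the Graphics-like object the C++ LookAndFeel would register
const GraphicsContext = {
  ops: [],
  drawEllipse(x, y, w, h) { this.ops.push(['ellipse', x, y, w, h]); },
  drawLine(x1, y1, x2, y2) { this.ops.push(['line', x1, y1, x2, y2]); },
};

// What engine.invoke("drawRotarySlider", ...) would land on in js:
// a knob body plus a pointer line at the angle matching the current value.
function drawRotarySlider(x, y, width, height, sliderPos, startAngle, endAngle) {
  const cx = x + width / 2;
  const cy = y + height / 2;
  const radius = Math.min(width, height) / 2;

  GraphicsContext.drawEllipse(x, y, width, height);

  const angle = startAngle + sliderPos * (endAngle - startAngle);
  GraphicsContext.drawLine(cx, cy,
                           cx + radius * Math.sin(angle),
                           cy - radius * Math.cos(angle));
}
```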

I’ve no idea if this would all work. I might have a crack at an approach like this tomorrow just to see how it shapes up. It may be that this is simply a design pattern that works better for me and my current project and doesn’t at all align with your plans for the mighty Blueprint!

I think in theory this approach works orthogonally to the current examples, in which slider components are written from scratch handling mouse events etc. In my case it could mean less js function binding to handle UI actions/callbacks, as I’m working outside of a generic plugin project and am not using a ValueTree backed data model etc.

If this approach works for me I’d happily drop you some contributions for components using this model. I need sliders, buttons, comboboxes etc. and probably don’t have time to write them all from scratch in React.

Again feel free to shout if I can help with the DynamicObject binding stuff or anything.

Well, the basics appear to work. I can register my ReactBasedSlider and my SliderListener callbacks are all working.

I’ll attempt a little hacky version of the look and feel override just using registerNativeMethod and stashing the Graphics reference in context, and let you know how that goes.

I think these two component approaches might be able to live side by side …