Building a plugin for an instrument based on the Web Audio API?

Hi everyone,

I am researching how I can build a plugin (VSTi and AU) version of my synthesizer, Viktor NV-1: http://nicroto.github.io/viktor/.
The NV-1 is built on top of the Web Audio API, so I will need to be able to embed a browser inside the VSTi.
I am looking at the Chromium Embedded Framework (https://bitbucket.org/chromiumembedded/cef) and I’ve posted a question there, but haven’t gotten any response yet.

Has anyone in the JUCE community done something like that?

Any pointers and guidelines are much appreciated - I am open to all feedback.

Cheers!

Hi tsenkov,

Did you get anywhere with this? Did you manage to get CEF working with JUCE?

Cheers,

Jamie

Hi Jamie,

Unfortunately, I didn’t, mostly for these reasons:

  • Web Audio, like everything else in the Chromium project (at the time), was sprinkled with GPL-licensed code here and there (and I wanted to build something proprietary).
  • The Chromium project couldn’t be developed in Xcode - no code navigation (or worse, navigating to the wrong thing all the time), etc.
  • I talked to the original creator of Web Audio, Chris Rogers, and he raised serious concerns about such an attempt. The Chromium process model is too complex to change in a way that delivers the samples produced by the browser’s audio thread to the plugin’s audio thread (in Chromium, I believe, audio ran in a separate process; WebKit at the time had both the single-process 1.0 API and the multi-process 2.0 API). There was also the delivery of MIDI messages to the Javascript VM for processing, and I don’t think there was any way to guarantee sample-accurate rendering either. (For the thread-crossing half of that, see the FIFO sketch after this list.)
  • I definitely didn’t know C++ well enough to take on such a huge challenge.
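
For what it’s worth, the thread-to-thread half of that problem (as opposed to the process-to-process half, which is what makes Chromium’s model so hard) is a standard single-producer/single-consumer handoff. A minimal sketch using juce::AbstractFifo, assuming both threads live in the same process; BrowserAudioBridge is an invented name, not anything from CEF or JUCE:

```cpp
#include <juce_core/juce_core.h>
#include <vector>

// Hands samples rendered on a browser-owned audio thread to the plugin's
// audio thread via a lock-free single-producer/single-consumer FIFO.
class BrowserAudioBridge
{
public:
    explicit BrowserAudioBridge (int capacity) : fifo (capacity), buffer ((size_t) capacity) {}

    // Producer side: called on the browser's audio thread. If the FIFO is
    // full, excess samples are dropped (a real bridge would need a policy).
    void push (const float* samples, int numSamples)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (numSamples, start1, size1, start2, size2);
        for (int i = 0; i < size1; ++i) buffer[(size_t) (start1 + i)] = samples[i];
        for (int i = 0; i < size2; ++i) buffer[(size_t) (start2 + i)] = samples[size1 + i];
        fifo.finishedWrite (size1 + size2);
    }

    // Consumer side: called from processBlock(); returns samples actually read.
    int pop (float* dest, int numSamples)
    {
        int start1, size1, start2, size2;
        fifo.prepareToRead (numSamples, start1, size1, start2, size2);
        for (int i = 0; i < size1; ++i) dest[i] = buffer[(size_t) (start1 + i)];
        for (int i = 0; i < size2; ++i) dest[size1 + i] = buffer[(size_t) (start2 + i)];
        fifo.finishedRead (size1 + size2);
        return size1 + size2;
    }

private:
    juce::AbstractFifo fifo;
    std::vector<float> buffer;
};
```

Crossing a process boundary, as Chromium’s model requires, would need shared memory on top of this, which is exactly where it stops being simple.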

From talking with Chris and other people, I gathered that it’s probably best to extract Web Audio from Chromium or WebKit, and then either:

  • use a Javascript engine (and reimplement/reuse the Javascript mappings to native calls/objects) and continue writing projects in Javascript but with native UI (effectively removing the browser visual rendering and making the threading model easier);
  • or use the extracted Web Audio as a C++ library in native code, where it will be much more predictable to deliver samples on time and achieve sample-accurate rendering (see the sketch after this list).
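
To make the second option concrete: Web Audio renders in fixed 128-frame quanta, while a host calls processBlock() with arbitrary block sizes, so the native side has to pull whole quanta and hand out slices. A minimal sketch, assuming a hypothetical extracted engine exposing a renderQuantum() call (the WebAudioEngine interface below is invented for illustration):

```cpp
#include <algorithm>
#include <array>

// Hypothetical facade over an extracted Web Audio engine: fills one
// 128-frame render quantum per call. Stubbed to render silence here.
struct WebAudioEngine { void renderQuantum (float* dest128) { std::fill_n (dest128, 128, 0.0f); } };

class QuantumPuller
{
public:
    // Called from the plugin's audio thread with the host's block size.
    void process (WebAudioEngine& engine, float* out, int numSamples)
    {
        int written = 0;
        while (written < numSamples)
        {
            if (remaining == 0)                   // refill with one quantum
            {
                engine.renderQuantum (quantum.data());
                readPos = 0;
                remaining = kQuantum;
            }
            const int n = std::min (numSamples - written, remaining);
            std::copy_n (quantum.data() + readPos, n, out + written);
            written += n; readPos += n; remaining -= n;
        }
    }

private:
    static constexpr int kQuantum = 128;          // Web Audio render quantum size
    std::array<float, kQuantum> quantum {};
    int readPos = 0, remaining = 0;
};
```

Since everything stays on one thread, this is where sample-accurate scheduling becomes tractable.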

At this point I kind of gave up… :frowning:

I find WebKit’s coding style a lot easier to read, and they actually keep the project usable from Xcode - the Chromium developers don’t use an IDE; as far as I found out from documentation, IRC channels, etc., they develop with CLI tooling alone.

But I believe Chromium has a more advanced implementation of Web Audio, with more features and higher-quality components (filters sound a lot better in Chrome than in Safari, in my experience).

If you decide to try to extract Web Audio, I believe Chromium (Blink, which, if nothing has changed, is an inseparable part of Chromium) has the version you want.

If you make any progress, please let me know. I always meant to return to this project at some point, but I can’t commit to when I will have the time to do it.

I hope this is helpful.

Thanks for the detailed reply and report, very useful…

My main interest in getting CEF working under JUCE was actually not audio but support for the FileSystem API - basically to experiment with running some legacy web code in a plugin context.

My conclusions so far are similar to yours: CEF, with its multi-process model, is non-trivial to incorporate into a JUCE component. Unless I see that someone has already demonstrated a proof of concept with this, I’m giving up too. FWIW, there’s a thread on the CEF list about this: http://www.magpcss.org/ceforum/viewtopic.php?f=6&t=11367

Cheers.

Jamie


Awesome discussion! I read the related discussions in the Atom forum and the KVR Forum as well.

  • use a Javascript engine (and reimplement/reuse the Javascript mappings to native calls/objects) and continue writing projects in Javascript but with native UI (effectively removing the browser visual rendering and making the threading model easier);

Would it be an option to run the Javascript engine (most likely node) for processing the audio, plus a web view for just the UI? Or would that tax the CPU unnecessarily even more?

For the mapping of the Web Audio API to node, something like the web-audio-api package could be used (which unfortunately is incomplete and probably abandoned). The question then is how to connect that library’s output stream to the VSTi output (one hedged approach is sketched below).
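
One way to wire that last step from the C++ side would be to run node as a child process that writes raw PCM to stdout, and read it in the plugin via juce::ChildProcess. Everything about the node side here is an assumption (render.js is a made-up script that would stream interleaved float32 at the host sample rate):

```cpp
#include <juce_core/juce_core.h>

// Bridges a node child process's stdout (raw float32 PCM, by assumption)
// into the plugin. NodeAudioSource is an invented name.
class NodeAudioSource
{
public:
    bool start()
    {
        // Hypothetical script: must write nothing but audio to stdout.
        return proc.start ("node render.js");
    }

    // Blocking read; in a real plugin this should run on a feeder thread
    // that fills a lock-free FIFO, never on the audio thread itself.
    int read (float* dest, int maxSamples)
    {
        const int bytesRead = proc.readProcessOutput (dest, maxSamples * (int) sizeof (float));
        return bytesRead / (int) sizeof (float);
    }

private:
    juce::ChildProcess proc;
};
```

Latency and clock drift between the two sides would still need handling, which is the same problem the CEF route has.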

Interesting topic! It would be easier if the DAW were implemented in JS itself…

There were two talks at ADC a few weeks ago on a similar topic:

Great discussion - I’ve been wanting to comment on this but always got distracted. I’ve been meaning to do this for some ideas I wanted to prototype. Embedding browsers in audio plugins is a breeze in Max/MSP; however, I want to do this with JUCE.

If embedding a whole browser is not possible, is there a way to get audio from a web link apart from the usual API route?
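
If the link points at an actual audio file rather than a page that plays audio, one browser-free route is plain HTTP plus JUCE’s built-in decoders. A sketch, assuming the URL resolves directly to a WAV/AIFF/FLAC/Ogg file and that this runs on a background thread (never the audio thread):

```cpp
#include <juce_audio_formats/juce_audio_formats.h>

// Downloads and decodes an audio file from a direct link; returns an empty
// buffer on any failure. The URL passed in is a placeholder.
juce::AudioBuffer<float> fetchAudioFromUrl (const juce::String& urlText)
{
    juce::AudioFormatManager formats;
    formats.registerBasicFormats();                       // WAV, AIFF, FLAC, Ogg...

    auto stream = juce::URL (urlText).createInputStream (
        juce::URL::InputStreamOptions (juce::URL::ParameterHandling::inAddress));

    juce::AudioBuffer<float> buffer;
    if (stream == nullptr)
        return buffer;                                    // network failure

    std::unique_ptr<juce::AudioFormatReader> reader (
        formats.createReaderFor (std::move (stream)));

    if (reader != nullptr)
    {
        buffer.setSize ((int) reader->numChannels, (int) reader->lengthInSamples);
        reader->read (&buffer, 0, (int) reader->lengthInSamples, 0, true, true);
    }
    return buffer;
}
```

This won’t help with pages that synthesize or stream audio via JS, of course; that case is back to embedding an engine.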

Well, I still want to do this but don’t know where to start. Basically, I made a sampler that can sample anything on the web through a Max4Live plugin in Max/MSP. However, to route the audio I used a loopback mechanism like Soundflower or JACK Audio.

Now I want to get rid of the dependency. Is it possible to do something like that in JUCE? This could be amazing for samplers and sampling technology.
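
Not a full answer, but the Soundflower/JACK trick does translate to JUCE directly, at least in a standalone app: select the loopback device as the input in an AudioDeviceManager. This moves the capture side into JUCE without removing the driver dependency; the device name below is just whatever the OS reports:

```cpp
#include <juce_audio_devices/juce_audio_devices.h>

// Point a standalone JUCE app's input at a loopback driver so that whatever
// the browser plays can be sampled. Won't work inside a plugin, where the
// host owns the audio device.
void captureFromLoopback (juce::AudioDeviceManager& deviceManager)
{
    deviceManager.initialise (2, 2, nullptr, true);       // 2 in, 2 out

    juce::AudioDeviceManager::AudioDeviceSetup setup;
    deviceManager.getAudioDeviceSetup (setup);
    setup.inputDeviceName = "Soundflower (2ch)";          // name as the OS reports it
    deviceManager.setAudioDeviceSetup (setup, true);      // returns an error String on failure
}
```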

Here’s a link to the project if you are curious: https://gumroad.com/l/websamplr