Building a plugin for an instrument based on the WebAudio API?


#1

Hi everyone,

I am researching how I can build a plugin (VSTi and AU) for my synthesizer Viktor NV-1: http://nicroto.github.io/viktor/.
The NV-1 is built on top of the Web Audio API, so I will need to be able to embed a browser into the VSTi.
I am looking at the Chromium Embedded Framework (https://bitbucket.org/chromiumembedded/cef) and I've posted a question there, but haven't gotten any response, yet.

Has anyone in the JUCE community done something like that?

Any pointers and guidelines are much appreciated - I am open to all feedback.

Cheers!


#2

Hi tsenkov,

Did you get anywhere with this? Did you manage to get CEF working with JUCE?

Cheers,

Jamie


#3

Hi Jamie,

Unfortunately, I didn’t. Mostly because of these:

  • Web Audio, like everything else in the Chromium project (at the time), was sprinkled with GPL-licensed code here and there (and I wanted to build something proprietary).
  • The Chromium project couldn’t be developed in Xcode - no code navigation (or worse, navigating to the wrong thing all the time), etc.
  • I talked to the original creator of Web Audio, Chris Rogers, and he raised serious concerns about such an attempt. The Chromium process model is complex, and changing it so that samples produced by the browser's audio thread reach the plugin's audio thread would be hard (in Chromium, I believe the audio ran in a separate process; WebKit at the time offered both a single-process 1.0 API and a multi-process 2.0 API). Delivering MIDI messages to the JavaScript VM for processing was another open problem, and I don't think there was any way to guarantee sample-accurate rendering either.
  • I definitely didn’t know C++ well enough to take on such a huge challenge.
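None of the code below is from the original discussion, but to illustrate the thread-bridging problem Chris described: handing samples from one real-time thread (the browser's renderer) to another (the plugin's audio callback) typically means a lock-free single-producer/single-consumer FIFO, roughly like this sketch (all names are hypothetical):

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Minimal single-producer/single-consumer lock-free FIFO for float samples.
// The browser-side audio thread would push(); the plugin's audio callback pop()s.
class SampleFifo {
public:
    explicit SampleFifo(size_t capacity) : buffer(capacity + 1) {}

    bool push(float sample) {
        size_t head = writePos.load(std::memory_order_relaxed);
        size_t next = (head + 1) % buffer.size();
        if (next == readPos.load(std::memory_order_acquire))
            return false; // full: the producer must drop or wait
        buffer[head] = sample;
        writePos.store(next, std::memory_order_release);
        return true;
    }

    bool pop(float& sample) {
        size_t tail = readPos.load(std::memory_order_relaxed);
        if (tail == writePos.load(std::memory_order_acquire))
            return false; // empty: the consumer outputs silence
        sample = buffer[tail];
        readPos.store((tail + 1) % buffer.size(), std::memory_order_release);
        return true;
    }

private:
    std::vector<float> buffer;
    std::atomic<size_t> writePos{0};
    std::atomic<size_t> readPos{0};
};
```

The FIFO itself is the easy part; the real trouble is latency and clock drift, since the browser renders on its own schedule, so the plugin either buffers (adding latency) or has to resample.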

From talking with Chris and other people, I gathered that it's probably best to extract Web Audio from Chromium or WebKit. And then either:

  • use a JavaScript engine (and reimplement/reuse the JavaScript mappings to native calls/objects) and continue writing projects in JavaScript but with a native UI (effectively removing the browser's visual rendering and simplifying the threading model);
  • or use the extracted Web Audio as a C++ library in native code, where it will be much more predictable to deliver samples on time and render with sample accuracy.
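To make the second option concrete (this sketch is not from the original thread, and the synth class is purely hypothetical): once the audio engine runs in-process, sample-accurate rendering reduces to splitting each block at every event's sample offset - exactly the guarantee a browser's process boundary makes impossible:

```cpp
#include <cstddef>
#include <vector>

// A toy "synth" standing in for an extracted Web Audio graph (hypothetical).
struct ToySynth {
    float level = 0.0f;                      // current output level
    void handleEvent(float newLevel) { level = newLevel; }
    void render(float* out, size_t n) {      // fill n samples at current level
        for (size_t i = 0; i < n; ++i) out[i] = level;
    }
};

struct TimedEvent { size_t sampleOffset; float value; };

// Sample-accurate rendering: render up to each event's offset, apply the
// event, then continue, so it takes effect on exactly the right sample.
void processBlock(ToySynth& synth, float* out, size_t numSamples,
                  const std::vector<TimedEvent>& events) {
    size_t pos = 0;
    for (const auto& e : events) {
        if (e.sampleOffset > pos)
            synth.render(out + pos, e.sampleOffset - pos);
        synth.handleEvent(e.value);
        pos = e.sampleOffset;
    }
    if (pos < numSamples)
        synth.render(out + pos, numSamples - pos);
}
```

This block-splitting pattern is the conventional way plugin hosts deliver timestamped MIDI; with the browser out of the picture there is no IPC hop to blur the event timing.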

At this point I kind of gave up… :frowning:

I find WebKit’s coding style a lot easier to read, and they actually keep the project usable from Xcode. The Chromium developers don’t use an IDE; they develop with CLI tooling, as far as I could tell from the documentation, IRC channels, etc.

But I believe Chromium has a more advanced implementation of Web Audio, with more features and higher-quality components (the filters sound a lot better in Chrome than in Safari, in my experience).

If you decide to try and extract Web Audio, I believe Chromium (Blink, which, if nothing has changed, is an inseparable part of Chromium) has the version you want.

If you make any progress - please, let me know. I always meant to return to this project at some point, but I can’t commit to when I will have the time to do it.

I hope this is helpful.


#4

Thanks for the detailed reply and report, very useful…

My main interest in getting CEF working under JUCE was actually not for audio but for the FileSystem API support - basically to experiment with running some legacy web code in a plugin context.

My conclusions so far are similar to yours: CEF, with its multi-process model, is non-trivial to incorporate into a JUCE component. Unless I see that someone has already demonstrated a proof of concept, I think I’m giving up too. FWIW, there’s a thread on the CEF list about this: http://www.magpcss.org/ceforum/viewtopic.php?f=6&t=11367

Cheers.

Jamie