Oh god, you want to use JavaScript as an audio engine, inside of JUCE?
That sounds completely backward. It would probably just be easier to port the DSP code over to C++.
You could then compile that DSP code with Emscripten to convert it to JS for use on the website. This gives you the added benefit of having JUCE's great audio/DSP classes. Emscripten does support WebAudio, and I know a few people on the forums have played with that.
Doing it the other way around just seems perverted.
If you really want to dive into that rabbit hole, you would probably want to start here:
You can load a DLL using the above. I'm assuming your JavaScript library would have some entrypoint function that takes an audio buffer and is visible in the DLL. If so, you could call that function within JUCE's processBlock or something similar.
You may be able to run it inside JUCE's own JavaScript interpreter, though it would take some work to get it & all dependencies working, and it is absolutely a terrible idea.
I shudder at the thought of it running on the audio thread, but it might work depending on how complex the filter is.
But seriously, just port it to C++. You have all the tools you'd ever need to do that within JUCE. I just looked at the AudioLib.js library, and JUCE has a replacement for everything in that library and more, plus you'd be at native speed.
I did not realize a JavaScript sound engine was that bad a fit. I need to know more about how to port JavaScript to JUCE.
I have been a beginner at C++ since 2007, but I have been skilled at JavaScript since 2004.
With AudioLib.js, each module in the FX chain is connected to the next module in the chain with the push() function. I was initially enticed by not having to manually set up an audio buffer between each module in the chain.
I guess it would be simple enough to copy and paste the audio buffer code each time it is needed.
All of the programming I have done with AudioLib.js is for the purpose of FFT analysis, so that my website can display strange signal-based visualizations for music.
One of my ideas is to use FFT analysis of the music to generate control signals for sine wave, triangle wave, and square wave oscillators. This control signal would alter the pitch and amplitude of the oscillators. This could generate interesting visualizations.
Here is a link to some JavaScript FFT code that has not been put to use yet. However, it passes validation in the Mozilla developer tools.
What do you think of these ideas?
I would abandon this idea of getting JavaScript working in C++. The V8 engine or servo will do a better job of converting the JS to runnable code than this JSC.exe or whatever else. It's a waste of time imo.
If you're not going to rewrite in C++, maybe look at sticking with JavaScript. It looks like Microsoft will have a working react-native solution across multiple platforms soon, so maybe just stay in that domain and use WebAudio with a React UI. You could just use Electron as well.