Hello all - first post here.
I have a web app that composes music algorithmically and can play the composed tracks in the browser. The obvious problem, however, is that the sound quality of Web Audio isn't as good as native.
My question is: how feasible would it be to write a native audio layer with JUCE and package it with my JS app via Electron, so I can deliver a desktop app that looks and composes the same as my current app, but with high-quality sound?
But it doesn’t go into the details, obviously.
That is doable. I used this setup for a client, and the app has been published for 4 years now and is used very often (>180k downloads and ~5k users).
However, you will always pay a price in terms of synchronicity between the web app and the audio engine. If you can avoid that friction by staying in one system, it’s worth it.
Btw, I was listening in on that workshop (not actively participating…). Maybe @lukasz.k still has some materials?
His approach was even more sophisticated than mine. I simply spawn a new process and communicate via InterProcessConnection, which is relatively easy to connect to from Electron; just check out how the magic header has to look for JUCE to read it.
Thanks for your reply!
However, you will always pay a price in terms of synchronicity between webapp and
audio engine. If you can avoid this friction by staying in one system, it’s worth it.
Sure. At the moment I have MIDI out from my web app, so I can get it to talk to my DAW, and I can tweak the display latency in my app to make it appear as though the DAW sound and my graphics are totally in sync.
I assume there would be a similar kind of latency with what you’re suggesting, so I think there will always be ways to mask it.
The reason I’m looking at this option, though, is obviously that I’d prefer users not to have to hook up a DAW and configure the MIDI channels just to get better sounds.