Exciting news for those of you who’ve been following our progress with SOUL - you can now actually try writing it yourself… in your browser!
The website is https://soul.dev, where we’ve built a WASM/WebAudio-based playground in which you can write and interact with simple SOUL programs. The playground supports audio in & out and WebMIDI, and can expose properties as event sources, so you can write synths, effects, etc. with live properties.
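For a flavour of what exposing a property as an event source looks like, here’s a minimal sketch of a gain processor - the annotation keys (name, min, max, init) follow the style used in the example programs, so treat the exact set as illustrative rather than a definitive API:

processor Gain
{
    input  stream float audioIn;
    output stream float audioOut;

    // This event input appears in the playground as a live, adjustable property
    input event float volume [[ name: "Volume", min: 0, max: 1, init: 0.5 ]];

    float currentGain = 0.5f;

    // Called whenever a new value arrives on the 'volume' event input
    event volume (float newLevel)
    {
        currentGain = newLevel;
    }

    void run()
    {
        loop
        {
            audioOut << audioIn * currentGain;
            advance();
        }
    }
}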
(The list of examples is hosted in our GitHub repo at soul-lang/SOUL/examples, so if you write something great and would like us to add it as an example, please let us know!)
This is still very early in the SOUL project, and the playground is only the first step towards a range of products and a much more fully-featured API that we’re aiming to have ready later this year, but it’s fun to tinker with, and we’d love to hear your feedback on the language itself.
The Faust DSP language has a (still experimental) SOUL backend that allows Faust DSP source code to be compiled directly into SOUL source code. The resulting auto-generated SOUL code can then be used alongside hand-written SOUL processors or graphs.
For now this is only available in the Faust GitHub repository. A faust2soul script simplifies the Faust DSP => SOUL compilation process, including the production of MIDI-controllable polyphonic instruments.
yeah, just add a member keyword to indicate a function is a member function, maybe?
// hypothetical syntax: 'member' marks this as a member function
member void addSlide (int channel, float slide)
{
    // update the slide value for every active note on this channel
    for (int i = 0; i < int(activeNotes); ++i)
        if (noteArray.at(i).channel == channel)
            noteArray.at(i).slide = slide;

    // keep currentSlide tracking the most recently-added note
    if (int(activeNotes) > 0)
        currentSlide = noteArray[activeNotes - 1].slide;
}
First up, yes, please do suggest features we could include! We’ve got a long list of things we’d like to do, but I’m sure there are plenty of ideas we’ve not thought of that would make the language easier to use and more flexible.
Range-based for is on the list - that’s certainly a useful addition.
Having a ShaderToy-esque (soultoy?) server to browse and rate other people’s examples would be an amazing learning resource for the language (and might kickstart a popularity surge similar to the one GLSL enjoyed from ShaderToy). I’m also looking forward to being able to #include / import other .soul files / processors and graphs to build up higher-level abstractions.
Looking at your code, I see that you’ve stuck a mono-to-stereo mixer in, and since it’s just hard-panned left/right you can get away with a different approach - the convention is that if you have two mono output streams, they’re treated as left and right, so you can write it more simply.
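Roughly along these lines - a minimal sketch (not the exact code from this thread) of a processor that declares two mono output streams, which end up as the left and right channels:

processor StereoSine
{
    // Two mono output streams: by convention the first is left, the second is right
    output stream float audioOutLeft;
    output stream float audioOutRight;

    void run()
    {
        let cycle      = 6.2831853f;
        let phaseDelta = float (6.2831853 * 440.0 / processor.frequency);
        float phase    = 0.0f;

        loop
        {
            let sample = sin (phase) * 0.2f;

            audioOutLeft  << sample;   // left channel
            audioOutRight << sample;   // right channel

            phase += phaseDelta;

            if (phase >= cycle)
                phase -= cycle;

            advance();
        }
    }
}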
I’m experiencing a bug with the SOUL playground: when playing two or more notes on any of the synth examples, after a second or so the sound sort of “pulses”, after which only one of the notes is still playing.
It’s most noticeable on the SineSynth example when playing two notes a semitone apart: the expected phasing is audible, but after a second only one of the notes is still playing.
I’ve experienced this on both my Windows 7 laptop and my Windows 10 desktop, using Chrome on both - I also tried Edge on Windows 10 but hit the same issue. It works fine on my Android phone. I’m using the built-in, default audio device (jack output) on both Windows 7 and 10.
Are you by any chance using the built-in keyboard to trigger notes? If so, the problem is that key repeat tends to cause multiple events to get sent through (so holding down ‘a’, for example, sends multiple note-ons to the SOUL code). It’s basically a bit broken, and is only really useful for triggering a few notes for testing. I’d suggest attaching a real MIDI keyboard for proper noodling.