- live-code SOUL in your browser!


I’m one of the developers of Wire, an audiovisual patching tool based on JUCE.

Must say I’m a huge proponent of the work done on SOUL, and would love to express our interest in the early-adopter/beta program and hosting SOUL in our patching software (just like we’re doing with GLSL). Is there a way for us to enter the program early as mentioned on the github page?


Hi, the SOUL editor is failing to load for me in Chrome, though it loads fine in Safari. Where/how should I report this?


Hmm, that’s the first time we’ve heard of any problems in Chrome. Is there something a bit unusual about your system?


Not beyond having a few extensions installed.
It would appear that the ‘Adblock’ Chrome extension was somehow causing the editor to hang on loading; since removing that extension, the tool is now working for me.


Yeah, the monaco editor is quite a beast!


Since this morning, the graph has suddenly stopped loading the processors to the output. I’m writing some procedural compositions, thanks to SOUL, and it all worked well until this morning. Please tell me where the mistake is.
For example, if I want one high and one low ClassicRingtone sequence, as I’ve coded in the link below, the program only runs the last processor in the code.


Yeah, we just pushed out a fix for the way the main graph is chosen. The preferred way now is to mark the processor you want to play with the [[ main ]] annotation, like this:
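For illustration, a minimal sketch of what that annotation looks like (the processor name and body here are placeholders, not the original poster’s code):

```soul
processor ClassicRingtoneHigh [[ main ]]
{
    output stream float out;

    void run()
    {
        loop
        {
            out << 0.0f;   // tone-generation code would go here
            advance();
        }
    }
}
```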

Otherwise, it’ll now default to the last usable processor declaration in the code.


Amazing! Thanks for the update Jules!


Really enjoying playing with the language in the web editor. I keep finding myself stuck when thinking about the “composition” of reusable processor objects. A graph is the way you route processors together, but you regularly want to route things together with trivial modifications, such as scaling the output of one processor. Would it be possible to do such things without creating a separate processor for the scaling, for instance in the graph connections? Am I missing something?


By scaling, do you mean applying a gain? That could be done by just passing the signal through another simple gain processor with a specialisation parameter?
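Something along these lines, a sketch assuming SOUL’s specialisation-parameter syntax (all names here are illustrative):

```soul
// A gain processor whose gain is fixed via a specialisation parameter
processor Gain (float gainToApply)
{
    input stream float in;
    output stream float out;

    void run()
    {
        loop
        {
            out << in * gainToApply;
            advance();
        }
    }
}

graph Example [[ main ]]
{
    input stream float audioIn;
    output stream float audioOut;

    // Specialise the processor with the gain value at instantiation
    let gain = Gain (0.5f);

    connection
    {
        audioIn -> gain.in;
        gain.out -> audioOut;
    }
}
```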


…sorry, re-read your message more closely. Yeah, we’ve considered adding an optional gain to each connection, but didn’t see it as a super-high priority, as there are other ways to do it and it’d really just be syntactic sugar. Maybe in a future version 🙂


Cool. Here is a port of one of Mick Grierson’s Maximilian examples to SOUL. Bonus points if someone can add timpani…


Love it! It’s super-compact too, considering that it’s got a tune in it… I would say “can we add it to our examples” but not sure whether Vangelis would come after us if we did that…


Yeah, love that example, rather cool. Funnily enough I was discussing whether it would be possible to write a complete tune in SOUL just the other week.


Tidied it up a bit and put it on GitHub.

My initial approach was to try to make everything modular, so e.g. an oscillator would be a processor, but that seemed complicated and produced many more SLOC, so I went for a more C-like style where state and functions are separated.
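To illustrate the C-like style described above, here’s a rough sketch (my own illustration, not the code from the GitHub repo): state lives in a plain struct, and free functions operate on it by reference.

```soul
// State kept in a struct, separate from the functions that use it
struct SineState
{
    float phase;
}

// Advance the oscillator by one sample and return the output
float nextSine (SineState& s, float phaseIncrement)
{
    let sample = sin (s.phase);
    s.phase += phaseIncrement;

    if (s.phase > 6.2831853f)   // wrap at 2*pi
        s.phase -= 6.2831853f;

    return sample;
}
```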


Yeah, one of the nice things we’ve found with the language is that it’s good to have the option to write things in either of those styles.


Are there any plans to support SIMD-style datatypes, e.g. float2 or float4? Then we could write code that reads from two mono sources (left and right) into a float2, and all mathematical operations on it would automatically be SIMD where the hardware allows for it.

I think this would help the compiler a lot, as it’s basically spoon-fed a possible optimization.

If the hardware doesn’t support SIMD-style operations, they can simply be done one at a time. No loss.

Even the best compilers can’t detect the intent and automatically generate fast code. We created our own SIMD class with those datatypes and rewrote tiny portions of some DSP code, and the result was up to 70% faster than trusting the compiler. We measured hotspots and optimized only the worst offenders that way. The rest still uses regular floats and processes left and right separately, as in those cases we are memory-latency bound.

Even in cases where we thought it wouldn’t matter (e.g. high-quality sample playback), we measured very nice improvements.


I’m a bit surprised you’re asking because it’s something that’s been a part of the language all along, it’s in the guide, and lots of the examples demonstrate it!

And yes, the LLVM vectoriser does a great job of turning them into SIMD operations!
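For reference, the built-in vector types look roughly like this (a sketch; see the language guide for the exact syntax):

```soul
processor StereoGain
{
    input stream float<2> in;    // a stereo pair carried as one vector
    output stream float<2> out;

    void run()
    {
        loop
        {
            out << in * 0.5f;    // element-wise multiply, vectorisable by LLVM
            advance();
        }
    }
}
```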


Oof. Sorry, I should have scrolled down. I saw the primitive types and missed the vector types.


On an unrelated note: that link you posted isn’t clickable in Chrome. If I copy and paste the link into the address bar, it works just fine, it’s just not clickable. Weird.
