I’m pretty excited about this project. As I was perusing the language documentation, I could see where you’re borrowing heavily from other languages and also introducing some of your own constructs. As good as the language itself is, I couldn’t help feeling that you might see faster adoption if developers were writing in the languages they already know. Maybe there’s an opportunity for language bindings to be written on top of SOUL?

I could see some potential barriers to that approach - JavaScript has no int type, for example - but with the right API, those shortcomings could be worked around.

As an aside, many JavaScript devs would have no idea why you would use an int vs a float (as just one example) so if spreading audio programming to the masses is one of the project aims, it might be worthwhile to consider some syntactic sugar as part of the recipe. An API could help guide devs to make the right choices.


I think you may have it a bit backwards: SOUL is a language akin to OpenGL’s GLSL. In OpenGL you write your GLSL code and pass it off (as an actual string) to a driver via a language-specific API, like the OpenGL C API or WebGL for JavaScript. The driver then compiles the GLSL source and runs the shaders.

SOUL will work the same way: you provide a driver with SOUL source and it does the actual compiling of the code. You can’t layer language bindings over that, because then the driver (or some intermediate step before the driver) would have to provide a C++ compiler, JavaScript interpreter, etc. to transpile the source into SOUL.
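To make the hand-off concrete, here is a hypothetical sketch of the “pass the source to a driver” pattern described above. `SoulDriver` and its `compile` method are invented names for illustration only, and the embedded SOUL snippet is just roughly modelled on the gain example from the SOUL docs — none of this is a real SOUL API.

```python
# Hypothetical sketch: the host hands SOUL source, as a plain string,
# to a driver that owns compilation -- the same shape as GLSL shaders.

SOUL_SOURCE = """
processor Gain
{
    input  stream float audioIn;
    output stream float audioOut;

    void run()
    {
        loop
        {
            audioOut << audioIn * 0.5f;
            advance();
        }
    }
}
"""

class SoulDriver:
    """Stands in for the host-side driver: it receives SOUL source as a
    plain string and is responsible for compiling and running it."""

    def compile(self, source: str) -> dict:
        # A real driver would parse and JIT-compile here; we just record
        # the source to show the shape of the hand-off.
        return {"source": source, "compiled": True}

program = SoulDriver().compile(SOUL_SOURCE)
print(program["compiled"])  # True
```

The point is simply that the host language never compiles anything itself; it only shuttles strings and data across the API boundary.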

To extend your GLSL analogy, libraries such as these exist: https://medium.com/@pailhead011/writing-glsl-with-three-js-part-1-1acb1de77e5c - in essence, a JavaScript API that renders GLSL.

One practical way to do what I’m suggesting would be to write libraries in various popular languages that offer friendly APIs (preferably following the norms of each language) and which simply generate SOUL under the hood. There could be a better way to hook into the system - SOUL uses an LLVM-based JIT compiler, if I recall correctly, so some abstraction might be available at that level - but at this point I’m not really proposing a specific technical approach; I’m providing feedback and offering my help.
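As a rough illustration of “a friendly API that generates SOUL under the hood”, here is a minimal builder sketch. Everything in it — `Processor`, `emit_soul`, the fluent methods, and the emitted syntax — is hypothetical, not a real binding; the generated text is only loosely patterned on SOUL’s processor syntax.

```python
# Hypothetical builder: a host-language API whose only job is to emit
# SOUL source text for the driver to compile.

class Processor:
    def __init__(self, name):
        self.name = name
        self.inputs, self.outputs, self.body = [], [], []

    def input(self, n):
        self.inputs.append(n)
        return self

    def output(self, n):
        self.outputs.append(n)
        return self

    def line(self, code):
        self.body.append(code)
        return self

    def emit_soul(self):
        # Assemble a processor declaration from the recorded pieces.
        decls  = [f"    input stream float {n};" for n in self.inputs]
        decls += [f"    output stream float {n};" for n in self.outputs]
        run = (["    void run() { loop {"]
               + [f"        {c}" for c in self.body]
               + ["        advance();", "    } }"])
        return "\n".join([f"processor {self.name}", "{"] + decls + run + ["}"])

gain = (Processor("Gain")
        .input("audioIn").output("audioOut")
        .line("audioOut << audioIn * 0.5f;"))
print(gain.emit_soul())
```

The appeal of this shape is that the developer stays entirely in their home language, and the SOUL string is an implementation detail.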

Incidentally, this thread was moved from https://github.com/soul-lang/SOUL/issues/10#event-2345430948 on Julian’s suggestion.

Ah, I see! Considering SOUL is rather object-like in syntax and JUCE already provides DSP types, I could see something there for constructing SOUL that way at a high level. It might be a bit harder when you need to start writing your own function routines/logic, like the three.js example at the end where they pass custom GLSL.

I think there are a few ways that we are tackling the sort of thing you are talking about.

I’d like to discuss the idea that making the language similar to something like JavaScript would help with adoption. The thing is, DSP programming is hard, and it’s not something that many people are able to do or, frankly, interested in doing.

By making the language basically C, the intention was to make it as familiar as possible to the target audience (existing DSP developers), who will already know that language. If you start with C, remove all the bits that make it unsafe for DSP (memory allocation, recursive functions, system calls), and add the bits needed to leverage SIMD-type architectures (native vector support), you end up with something that looks like SOUL…

Now, I think most people using the system are going to be sound designers plumbing together pre-existing processors into a graph - they’ll assemble chains of processing components to build channel strips or synth voices without really getting into what the DSP is doing. Instead, their work will basically be parameterising pre-existing processors to produce new effects - the equivalent of building a patch for an existing instrument, but with the instrument/patch distinction blurred. To make this possible we really need a graph-based editor where you drag/drop components and fiddle with the parameters. We are some way off from decent tooling, but we’re thinking about it.
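The “patching pre-existing processors” workflow could be sketched as a tiny in-memory graph model of the kind a drag-and-drop editor might build up. All the names here (`Graph`, `Node`, the example processors and parameters) are invented purely for illustration.

```python
# Hypothetical data model: a graph of named processor instances,
# configured only by parameters and connections -- no DSP code written.

class Node:
    def __init__(self, processor, **params):
        self.processor, self.params = processor, params

class Graph:
    def __init__(self):
        self.nodes, self.connections = {}, []

    def add(self, name, processor, **params):
        self.nodes[name] = Node(processor, **params)

    def connect(self, src, dest):
        self.connections.append((src, dest))

# A "channel strip" assembled purely by parameterising existing processors:
strip = Graph()
strip.add("eq",   "ExampleEQ",   lowShelfGainDb=-3.0)
strip.add("comp", "ExampleComp", ratio=4.0, thresholdDb=-18.0)
strip.connect("eq", "comp")
```

A graph editor would render this model visually; a back end could then lower it to a SOUL graph for the driver to compile.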

“To make this possible we really need a graph-based editor where you drag/drop components and fiddle with the parameters.”: AFAICS this will require much more sophisticated ways of accessing properties (like Endpoint annotations, inputs/outputs, etc.) of existing processors in a generic way.

This is something that Faust already does: see http://faust.grame.fr and https://github.com/grame-cncm/faust/tree/master-dev/architecture/soul

Yes, there are probably millions of shallow JS devs who have never used a statically typed language or bothered to understand what’s under the hood.

There are also lots of coders (but far fewer!) who are interested in and capable of writing DSP algorithms.

But I reckon the intersection of these two groups is very small!

And for anyone who doesn’t know anything about “proper” programming but wants to learn DSP coding, then learning about types and low-level stuff is not going to be optional, no matter what language or framework they use.

But we do have a vision for attracting the shallow, copy-and-paste coder community to SOUL. I think those are going to be the people writing SOUL graphs by copy-pasting some SOUL processors (i.e. the bits with the real code in them) that proper DSP experts have written and shared.


To be clear, I am not suggesting “make SOUL similar to JavaScript”. I’m suggesting “meet people where they live”.

I tend to agree with Jules and Cesare that the kind of higher-level abstraction I’m seeing the need for would not primarily target DSP developers… although I have been surprised many times in my career by what people can create when they have the tools, so I wouldn’t rule it out either.

Speaking from my own experience, I learn whatever languages are necessary to get the job done - but I learn new concepts more quickly when they are presented in a language I am familiar with, and I tend to favor technologies that allow me to extract business value more quickly. Even though I know C++ and Objective-C, I will write in another language if I possibly can.

There’s a whole world of really smart programmers who don’t know C (and who don’t have computer science degrees). I don’t consider them “shallow”; they just do a different kind of programming. Somewhere in the pile of JavaScript, Python, Java, and C# developers are some significant number of people who always wanted to try to make their own audio effects, but are put off by the complexity. JUCE was the first thing I ever found that made it remotely accessible, but I had to learn a ton before I could even start. From an intellectual standpoint, I loved how JUCE was designed, but in practice I found it hard to experiment and iterate in. SOUL is more promising in that department, and that’s one of the reasons I’m excited about it. I just think more can be done.

But that’s all philosophical. From a nuts-and-bolts integration standpoint - suppose I’m writing an iOS app in Swift or an Android app in Kotlin or Java… how am I going to bridge between the UI and the audio processing? A set of higher-level language bindings would be pretty useful for that.

I’ve tried in quite a few places to explain why this needed to be a new language, and why using/abusing an existing one just can’t meet the job-spec for the task required.

Obviously the SOUL API will come with bindings for all the popular languages, and a flat vanilla C API so it can be exposed through any other strange target language you can think of.

That API will give you functions to compile a chunk of SOUL code, and then to connect its input/output streams to callbacks in your host app, so that parameter changes, stream data, etc. can be pumped between the running SOUL kernel and your app/GUI (which could involve transmission over a network or other connection).
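The compile-then-connect-callbacks shape described above might look something like the following speculative sketch. `SoulKernel`, `connect_input`, `connect_output`, and `render` are all hypothetical names — the real API is not yet published, so this only illustrates the data flow, not actual SOUL bindings.

```python
# Speculative sketch: compile a chunk of SOUL, then pump stream data
# between the running kernel and the host app via callbacks.

class SoulKernel:
    def __init__(self, source):
        self.source = source          # a real API would JIT-compile here
        self.input_callbacks = {}
        self.output_callbacks = {}

    def connect_input(self, endpoint, callback):
        # Host supplies blocks of input data on demand.
        self.input_callbacks[endpoint] = callback

    def connect_output(self, endpoint, callback):
        # Host receives blocks of rendered output.
        self.output_callbacks[endpoint] = callback

    def render(self, num_frames):
        # Toy "processing": copy each input block straight to the outputs.
        for endpoint, cb in self.input_callbacks.items():
            block = cb(num_frames)
            for out_cb in self.output_callbacks.values():
                out_cb(block)

kernel = SoulKernel("processor PassThrough { /* ... */ }")
received = []
kernel.connect_input("audioIn", lambda n: [0.0] * n)
kernel.connect_output("audioOut", received.append)
kernel.render(4)
print(received)  # [[0.0, 0.0, 0.0, 0.0]]
```

The GUI (or a remote client) would sit entirely on the callback side, which is what makes bindings for Swift, Kotlin, JavaScript, etc. straightforward to layer on top.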

Ok - I think that’s what I was asking about all along. Is there somewhere I can go read more about that? It wasn’t obvious to me from your keynote or the Github readme, but maybe I missed something.

I’m sure it was mentioned in there somewhere, but easily missed, I guess! There’s a paragraph about it at the bottom of the overview doc in the repo:

We’re still working on the exact form of the API so aren’t going into much detail about it just yet.

Ok, this is all becoming clear. The overview is what got me thinking along these lines in the first place, so it was surprising when some of the comments seemed to suggest there wasn’t interest in a higher-level API. :grinning:

  • Because SOUL is an embedded language, even apps written in non-native languages like JavaScript, Java, Python, etc. can achieve C++-level (or better) performance, without the developer needing to learn C++ or use a compiler toolchain.

When the API is fleshed out, are you interested in the community helping to create language bindings, or is this to be a ROLI-only effort?

Damn right, the more help we can get the better!