SOUL Lang


#121

Without a keyword to introduce it, a mistyped assignment to an existing variable creates a new variable instead, which is hard to track down (been there, done that). We think C++'s auto is a pretty bad name, but they're unable to add extra keywords, so they reused auto for this purpose.

Having decided on a keyword, we went with var as this seemed pretty clear (it's what C# uses). The problem with const var is that it's a lot of typing and doesn't line up with var declarations, so let seemed like a sensible choice, and it will be familiar from Haskell and other functional languages. The only downside is possible confusion for JavaScript programmers.
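For anyone skimming, the difference in practice is something like this (a quick sketch with made-up names, as local declarations inside a function):

    let sampleRate = 44100.0;   // type inferred from the initialiser, value is constant
    var phase = 0.0;            // type inferred, but the variable stays mutable

    phase += 0.01;              // fine: a 'var' can be reassigned
    // sampleRate = 48000.0;    // error: a 'let' can't be modified after initialisation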

At present, no. However, try, catch and throw are reserved words, so we've got that option open. My thoughts are that yes, we will support this in the future, but that it is likely to be a V2 feature.

We’re in the early days with this, and at the moment we are using purely textual editors for designing and coding our graphs. The modular structure of the code lends itself to individual components being unit tested with test inputs and expected outputs, and complex graphs being composed of these tested units.

I'd expect the IDE to become graphical in the future for building the graphs, with text editors for the processors themselves. If we get this right, most sound design will happen at the graph level, with few people venturing into the internals of the processors to design a new filter or oscillator.
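To give a rough idea of what composing those tested units into a graph looks like, here's a small sketch (the processor and endpoint names are invented, and the exact spelling should be checked against SOUL_Language.md):

    graph SimpleChain
    {
        input  stream float audioIn;
        output stream float audioOut;

        let
        {
            filter = MyFilter;   // a unit-tested processor
            gain   = Gain;       // another independently-testable processor
        }

        connection
        {
            audioIn         -> filter.audioIn;
            filter.audioOut -> gain.audioIn;
            gain.audioOut   -> audioOut;
        }
    }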


#122

Sure, I agree! I was suggesting that, since let effectively means constant, the language could use const instead of let. So in SOUL_Language.md, where you say

let [name] = [initial value]; - infers the type from the initialiser, and the resulting variable is const

instead my suggestion would be to have

const [name] = [initial value]; - infers the type from the initialiser, and the resulting variable is const

That way, var means variable, and const means constant. There wouldn’t be any const var, as that would be contradictory.

Perhaps that's quite a big downside though - isn't JavaScript by far the most-used language that has both 'var' and 'let'?


#123

When we brainstormed this decision, we did consider a bare const, but overall the team voted for let. We based that on our own gut reaction to the code, on what other modern languages had chosen, and on wanting to gently encourage people to think in terms of const locals by making it the easiest, shortest syntax to type.

Although there’s a lot of variation between languages on this one, we made a list of what others do, and let was the most common choice: Rust, Swift, Haskell, and some others do the same. And obviously it’s standard mathematical notation for declaring a constant, which has got to be a big factor for us, as many DSP people have maths backgrounds.

(Sure, let in JavaScript is weird, but so much of JS is weird and quirky that those guys must be used to dealing with that kind of thing! At least to a JS programmer it'll look like familiar syntax, and the scoping rules are the same.)


#124

On that note, it would be nice if JUCE integrated better with AAX DSP. IIUC a key thing that's missing is a way for the processor to communicate with the editor other than directly accessing members (as they don't run on the same machine), and the same capability will be needed for SOUL plugins with UIs, as well as on other platforms like WPAPI.


#125

In SOUL the issue kind of goes away because it forces a separation of the processing from the client app via streams, which can be abstracted over network or other transport layers.

For AAX I’m really not sure whether there’s anything cross-platform we could add to JUCE that would help matters, but that’s a topic for another thread if you think there is.


#126

I think that’s exactly the same thing for AAX DSP and WPAPI.


#127

On that note I’ve been meaning to ask these questions about SOUL for a while now.

  1. I would like to understand better how data will be shared between processor and editor (a code example would probably do), for example in order to show an FFT, an envelope shaper, or even just some basic metering. This can be a sticking point for other offloaded DSP solutions (AAX DSP, UA, WPAPI): the amount of data, the time it takes, how regularly the data can be passed, or worse still, having to make sure there isn't any disagreement about the alignment/padding of the data between processor and editor.

  2. How are you expecting to deal with DAWs that would support other native plugin formats alongside SOUL? For example, imagine the following…
    In a plugin chain you have…

  • plugin A (SOUL)
  • plugin B (SOUL)
  • plugin C (VST)
  • plugin D (SOUL)

There are several things to consider here…

  1. If SOUL is being optimised on a DSP chip, there's additional overhead in passing audio data back up and down. In the best-case scenario above, after plugin B we have to pass audio back up to be processed natively, and then back down again for plugin D.

  2. Will there be a way for a DAW to optimise such that it can combine A and B into essentially one processor, and then D into another? Alternatively, Waves have gone in the direction of supplying a single plugin that itself hosts only WPAPI plugins - could something similar be done for SOUL? Or there's the UA approach: the buffer is passed back and forth for every UA plugin in all DAWs except the UA console, where they can guarantee that the only plugins running are UA ones, so everything can run on the hardware without anything being passed back to the native machine (except data required for editors). This is one of those areas where AAX does well, because Pro Tools can optimise since it knows which plugins are AAX DSP and which are AAX Native.

I guess another question extending from that is: will there be a way of determining the optimal place to run something? For example, should it be native, in the driver, or on DSP if all three are available. As demonstrated above there can be additional overheads in some scenarios, and users may also like to select which instances run where, since a DSP chip might be limited in how much it can run, so the user may want to be selective about which SOUL processors/plugins go onto it.

I’m sure these are all things that you’ve considered but it would be good to understand these cases as IMO these are the points that could really set it above the rest for both developers and users.


#128

Streams are the only way that a SOUL processor communicates with the outside world. In your app/GUI code, you'll call API functions to connect your own custom callbacks to SOUL streams, and that's how you'll send/receive parameter changes, events, etc. We'll define a standard format for the packing and alignment of the data types, so that any front-end using the API won't have to worry about the target architecture that's actually running it - the data will be converted on the way across.
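For a sense of what that boundary looks like from the SOUL side, here's a minimal sketch of a processor whose only contact with the host is through its declared endpoints (the processor and endpoint names are invented; this isn't taken from the official examples):

    processor Gain
    {
        input  stream float audioIn;    // audio arrives on an input stream
        output stream float audioOut;   // processed audio leaves on an output stream
        input  event  float gainLevel;  // a "parameter" is just an event endpoint

        float currentGain = 1.0f;

        event gainLevel (float newLevel)
        {
            currentGain = newLevel;     // the host delivers changes via the event endpoint
        }

        void run()
        {
            loop
            {
                audioOut << audioIn * currentGain;
                advance();
            }
        }
    }

The host-side callbacks you register through the API would then attach to audioIn, audioOut and gainLevel, with the data marshalled into whatever standard packing format gets defined.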

This will be a problem for individual DAW engines to solve in their own ways, but we'll obviously want to use the Tracktion engine as a reference implementation, so we'll have to solve it there.

Yes - like you said, there are a bunch of possibilities based on the overall structure of the graph.

If there are 10 VSTs and 1 SOUL plugin in the middle then you’d want to just run an old-fashioned graph and call the SOUL plugin’s JIT to render that one plugin. If you’re lucky enough to have a few SOUL plugins chained, then yes, it can easily merge them into a single JIT-ed lump. If you can divide the graph into parallel streams of SOUL/VST processing, they could be allocated to different cores and re-combined. Or if your graph starts with VSTs which all merge into a string of SOUL effects on the master output, that’s also a layout that could be handled specially. And there’s also an option to run all the VSTs with one buffer’s delay, and feed them back into a single SOUL graph at the points where they fit.

It’s an interesting problem, and will probably take years to fine-tune! …which is OK because it’ll also take years for old products to migrate across to SOUL.

Well, our hope is that we’ll be able to leave all the hard decisions up to the API in terms of the best place to run things, but it also seems likely that once things get to the point where there’s a lot of complicated hardware involved, we’ll probably end up needing to add some custom flags to let apps tweak the exact behaviour to their needs.


#129

This makes me very happy - although I wouldn't have expected anything less from you.


#130

Is the C-like syntax (cond) ? (then) : (else) available in SOUL? Or should if/then/else be used instead?


#131

That's the ternary operator, and it is available according to the documentation.
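For example, something along these lines should be valid (just a made-up sketch, assuming x is a float in scope):

    let magnitude = (x > 0.0f) ? x : -x;   // ternary form: the result can be a const 'let'

    // equivalent if/else form, which needs a mutable 'var'
    var magnitude2 = -x;
    if (x > 0.0f)
        magnitude2 = x;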


#132

Thanks… I read too fast :wink:


#133

Can we hope to have the SOUL runtime running on iOS at some point? I mean, will ROLI be able to convince Apple to accept a controlled runtime + JIT for audio? It would be just crazy if SOUL source code could then be dynamically compiled and run on iOS :wink:


#134

Yeah, we have a few ideas about how to achieve that.


#135

OK, I guess the same goes for Android then. So working on our Faust => SOUL backend at the source-code level makes even more sense :yum:


#136

Oh yes, JITting stuff on iOS would be awesome. I am currently writing a JIT compiler for my project, but the limitation of not being able to use it on iOS is pretty lame.


#137

I'm curious: what do you gain by using a JIT compiler over a normal compiler like Clang or GCC? Why doesn't Apple allow it on iOS? I'm assuming a JIT compiler is like the one used in the Projucer's Live Build?


#138

iOS doesn't allow jumping into and executing dynamically generated code (memory with the executable flag set), for security reasons. I never actually tried it, but I've read in many places that your app just crashes when it jumps into executable memory you have created at runtime.

I am building a visual programming environment, and there's a more or less 10x-100x performance increase when the nodes are compiled to machine code versus running an interpreter with virtual function calls, so that's why I need a JIT compiler.

The same problem might apply to SOUL - falling back to an interpreter on iOS would definitely defeat its purpose. I just hope that Apple relaxes the restriction for everybody, not just ROLI, and adds an entitlement that you have to request (with a very good reason).


#139

  • I tried the LLVM path in the past on iOS: compiling the LLVM chain, running my test program from Xcode in debug mode, the JIT was actually working, I was happy :slightly_smiling_face:, then as soon as I deployed the binary, boom! It was not working anymore because of the executable-memory-flag exception… :weary:
  • We tried the interpreter path in the Faust project; we see something like a 3x to 15x slowdown compared to the native (C++ or LLVM) path. It is usable, but only on fast machines and with not-too-complex DSP programs.
  • One way to do JIT-like stuff on iOS is possibly the WebAssembly path (since JavaScriptCore on iOS supports wasm AFAICS).
  • But using an Apple-authorized SOUL native runtime would probably be much better.

#140

There is an entitlement for macOS apps distributed through the App Store:

https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_security_cs_allow-jit

I don't see why this couldn't be extended to iOS - I guess it's just that nobody has asked before, so they went with security first.