SOUL Lang

Something like that - it’ll need a network protocol, and rather than invent one from scratch we’ll be looking at things like Dante, AES70, etc., to see whether there’s one that supports the kind of functionality we’ll need.

@jules
Hello, Jules~
Will the SOUL API have functionality to get the value of a variable inside the user’s SOUL code? Or will I be able to get the return value of a SOUL function using the SOUL API?
Or could I make a callback into the host app’s functions from my SOUL code? (The host app could be a Java or C# app that is not native.)
thx~

Not in the traditional way - remember this code may be running on the other end of a network, so all communication with the host app is via streams and events. But using those, you can get all the functionality you’d need.
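
To make that concrete, here’s a minimal sketch of the model in C++. The real SOUL API isn’t published yet, so every name here is hypothetical and purely illustrative; the point is that the host never calls functions inside the processor, it pushes events in and reads audio out, which works identically whether the processor runs locally or at the other end of a network.

```cpp
// Hypothetical illustration only - these names are NOT the real SOUL API.
#include <queue>

struct Event { int parameterID; float value; };

struct GainProcessor
{
    std::queue<Event> incomingEvents;   // host -> processor (e.g. over a network)
    float gain = 1.0f;

    void process (const float* in, float* out, int numFrames)
    {
        while (! incomingEvents.empty())            // apply any pending events
        {
            if (incomingEvents.front().parameterID == 0)
                gain = incomingEvents.front().value;

            incomingEvents.pop();
        }

        for (int i = 0; i < numFrames; ++i)         // stream processing
            out[i] = in[i] * gain;                  // processor -> host
    }
};
```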

SOUL is very exciting, and as @jules said in the keynote, the goal is to avoid ending up with something that’s done badly.

The one less “technical” issue that I think is worth a lot of thought by the ROLI team is licensing.

SOUL isn’t only a language; it requires a supporting ecosystem.
Consider two example strategies:

  • Microsoft’s Windows Phone / Windows CE
  • Google’s Android AOSP

Both have pros and cons; only one survived in a fragmented world :slight_smile:

I guess it’s still an ongoing internal discussion, but I felt it was important to mention, given that the dream is that my code would run even on some cost-effective MediaTek device… :wink:

2 Likes

My thinking is that something like Dante has fairly wide adoption, and if you can join that ecosystem you are way ahead of the game, since adoption of SOUL by hardware vendors will be crucial to SOUL’s success. So, if vendors have Dante (or similar) devices with internal DSPs, those can be leveraged for a very effective solution.

IMHO, in five years we will all be using network interfaces and enjoying super-low latency (it can be under 1 ms). USB, FireWire, Thunderbolt, etc., are all just too quirky to be long-term solutions. And if SOUL catches on, as we hope, super-low-latency network systems will be pretty much the only way to build a distributed system that responds effectively.

My two cents…

1 Like

First of all, when I saw the keynote, the whole concept of SOUL immediately made sense to me. However, coming from the electrical engineering side, and having some insight into how embedded systems for realtime audio are designed at the hardware level, I see significant changes in hardware and software system design that vendors would need to implement if they wanted to make their systems work that way.

While I really like the idea of being able to offload processing to a Dante device, I don’t think Audinate could do much to make this possible. This isn’t meant to be pessimistic, but to explain what I mean, let’s look at how a typical Dante-enabled audio device is built:

If you want to build a Dante-enabled product, you just buy the Brooklyn II PCB, which has everything on board to translate the network audio stream into a chip-level protocol (namely I2S/TDM) that you use to connect it to your DSP(s). For simple systems you can configure the soft core on the Dante module (which is a quite limited processor) to handle some simple control tasks; in a more complex design, however, there will be an intermediate CPU handling control of the Dante interface, the DSP(s), and the user interface.

Most likely there will be no connection between the Dante module and the DSP that could be used to feed the DSP with an executable. While it is possible to dynamically load code into a DSP at runtime (as UAD does with SHARC DSPs, or Avid does with C66x DSPs, for example), not all hardware designed for a more fixed setup will even have a connection between the control CPU and the DSP(s) that allows such reconfiguration at runtime. That being said, all of this is only an option for future designs that plan for this new connection, and not so much for the existing ecosystem of Dante devices with DSPs.

To make SOUL code work on your target device, you would need (a rough sketch of this flow follows the list):

  • a suitable JIT compiler that can translate SOUL code into DSP-specific code
  • a processor to run this compiler on, plus some way to load the compilation result into the DSP
  • a hardware and software design that allows dynamic reconfiguration of the DSP at runtime
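
Here’s that sketch, in purely hypothetical C++: none of these functions exist in any real SDK, they just put names on the three stages above.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Stage 1: a JIT that turns SOUL source into DSP-specific machine code.
// (Placeholder body: a real backend would target e.g. SHARC or C66x.)
std::vector<std::uint8_t> jitCompileForDSP (const std::string& soulSource)
{
    return std::vector<std::uint8_t> (soulSource.size()); // dummy output
}

// Stages 2 and 3: this needs a physical CPU<->DSP link, plus DSP-side
// software that tolerates being reconfigured while the system runs.
void uploadToDSP (const std::vector<std::uint8_t>& dspBinary)
{
    (void) dspBinary; // placeholder: most existing designs have no such link
}

void deploySoulPatch (const std::string& soulSource)
{
    uploadToDSP (jitCompileForDSP (soulSource)); // runs on the control CPU
}
```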

As I wouldn’t consider the soft core on the Dante module a suitable processor to run this compilation, I think most of the implementation work would have to be done by the hardware developers rather than on the Dante side: getting a JIT compiler running on the control CPU, and designing an interface and DSP software that allow dynamic loading of SOUL code onto the DSP in the first place. The only things Dante could offer would be a data channel embedded in their protocol to send SOUL code over, and maybe some nice enhancements to the routing view of the Dante Controller.

The really HARD work would have to be done by the DSP vendors, e.g. Analog Devices, Texas Instruments, or NXP, to create a JIT SOUL compiler that runs on embedded Linux and outputs DSP-specific binaries, if you want to go the DSP route at all.

So, hopefully those non-audio-centred companies will embrace the idea of SOUL and create those tools :slight_smile:

6 Likes

I would like to think that Audinate, and other vendors, might even extend their systems to accommodate SOUL. An adapted Dante might be optimized to pass the SOUL protocol through to the DSP devices, for example. In an ideal world, all hardware and software vendors will embrace SOUL and adapt systems to work within the framework. If adoption happens quickly enough, it might even be required to remain competitive.

I know I am being optimistic. But I see SOUL as a fundamental shift in the way audio processing is handled. I’d like to think the industry will see this also, and adapt to what will ultimately be a better way to do almost all aspects of audio processing.

2 Likes

Two questions for @jules and ROLI folks,

  1. Will there be an alpha/beta release before SOUL 1.0?
  2. Will the IR be released independently/before SOUL, or are you all on track to release both at the same time?

The reason I ask is that it would be awesome to play around with different ideas for the language before a stable 1.0 is unleashed.

1 Like

We’ll be inviting a few people we know to join in an alpha before we release it publicly.

And we’ve not exactly decided yet when we’ll reveal the IR (mainly because it could still change and we don’t want to confuse things by showing it and then refactoring it later).

As I told Cesare during ADC, I’m very excited by this.
Of course, static typing is the only sensible way; otherwise someone may start doing double-precision computations on an embedded system that supports doubles only through a software emulation layer. You have to be able to specify exactly what you’re doing, hence static typing (even in Python, in the scientific world, you use static types, so it’s a non-issue).
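
A tiny C++ example of the hazard static typing protects you from (illustrative only; the exact SOUL typing rules aren’t public):

```cpp
// On a DSP with no hardware double support, the literal 0.5 is a double,
// so the first function silently promotes to double arithmetic and drags
// in the software-emulation library; the second stays in fast
// single-precision throughout.
float scaledSlow (float x)   { return x * 0.5;  }   // mixed float/double
float scaledFast (float x)   { return x * 0.5f; }   // pure float
```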

I think if there is one thing that could make me stop doing my current job and go into audio professionally, it’s SOUL and the embedded side, with all the scheduling you need to orchestrate. Lots of fun to develop and create.

4 Likes

I am forwarding here some comments I made about SOUL to the Csound users group. I am a composer, a music programmer, and a longtime contributor to Csound.

TL;DR: Julian Storer wants to have one ring to rule them all. Sounds
good, will be more work than planned, and as presented will leave out
some features and issues that may be critical for Csound users or for
composers.

Details:

I listened to the whole presentation. I don’t doubt the abilities of
Julian Storer, and I don’t doubt the technicalities of the
presentation.

I have some understanding of the issues here because I have performed pieces I wrote using OpenGL Shading Language (GLSL) code to produce videos, and sampled some data to generate Csound scores. This was based on adapting both ShaderToy infrastructure code and example code. Note: ShaderToy has audio examples, which may have given Storer the idea for SOUL. I also have experience with WebAssembly, which is also relevant.

The basic idea of running audio DSP code on the GPU or other specialized
processors using technology similar to GLSL is quite sound as far as it
goes: round trips through the middle layers of audio processing are cut way
down by running within much lower layers, and some specialized processors,
especially the GPU, can be much faster than the CPU.

The idea of providing programs as source code for a multi-target compiler is also quite sound; this is the same approach as WASM, and I expect this trend to become more and more dominant.

What is missing:

– File reading and writing entirely within this low-level language is not possible. Oops, all the round trips through the middle layers come right back in to do file access.

– No very clear presentation of how to use sampled audio, though it will be possible by talking to the system via streams. This obviously involves file access; see above. Not implemented yet.

– No mention of dynamic voice allocation, one of the big strong
points of Csound.

– No mention of time/frequency processing, e.g. phase vocoding. There was some discussion of partitioned convolution with multi-rate buffers, which should enable phase vocoding along the lines of Csound’s PVS opcodes.

– No mention of multi-threaded rendering, a small strong point of Csound which will become a bigger strong point. But Storer wrote Tracktion, so he should be able to implement this.

– License? “Liberally licensed”, whatever that means. JUCE is dual-licensed with GPL v3, which is not directly compatible with Csound. I prefer open source, not free software.

– Providing a compiler that properly translates SOUL source code to multiple targets is by no means a small task. The story of Extended Csound indicates that targets which start out with a significant speed advantage over the general-purpose CPU will lose that advantage within a few years unless significant resources are devoted to maintenance.

I am registering my interest in SOUL and forwarding this email to Julian
Storer.

Regards,
Mike

2 Likes

Good to finally get our first bit of negative feedback! It was all looking too easy :slight_smile:

Also good to see that most of these points are based on speculating incorrectly about how we might do things that we’ve not made public yet.

I’ll really quickly address a few of these points:

Not allowing file access (or any other kind of system call) is kind of central to the whole point of what we’re doing.

The API will come with all kinds of helper functionality to read/write files, so you can easily stream the input and output of SOUL processors to/from files. TBH I’m not completely sure what you mean by “middle layers”, but we have a system for providing random-access resources (e.g. for sample data) that nodes anywhere in a graph can access.
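
As a rough illustration of that model (hypothetical names, not the actual SOUL API): the host loads sample data up front and hands the node a read-only, random-access view, so no file I/O ever touches the audio path.

```cpp
// Illustrative model only - NOT the real SOUL API.
#include <cstddef>

struct SampleResource               // random-access data, provided by the host
{
    const float* frames = nullptr;
    std::size_t numFrames = 0;
};

struct SamplePlayerNode
{
    SampleResource sample;          // no file access inside the node itself
    std::size_t position = 0;

    float renderNextFrame()
    {
        return position < sample.numFrames ? sample.frames[position++]
                                           : 0.0f;
    }
};
```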

Why would you expect us to mention this? It’s not part of the language, it’s just a use-case that people will implement. Sure, we’ll probably have some voice-allocation utilities in the core library, but we’ll have many other things too.

A SOUL synth can certainly have a set of voices that are dynamically enabled, the same way you’d write that with any other platform. The demo synth I showed in my presentation was doing just that.

However, a more interesting problem that we can solve is to limit the number of voices to only the number needed in a particular context, so that compiler optimisations kick in to eliminate a lot of overhead that’s hard to avoid when everything is dynamic at run-time.
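
Since the SOUL syntax for this isn’t public yet, here’s the idea expressed in plain C++: when the voice count is a compile-time constant, the per-voice loop has a known trip count, so a compiler can unroll, inline, and vectorise it in a way it can’t when the voice pool is fully dynamic.

```cpp
#include <array>

struct Voice
{
    bool active = false;
    float phase = 0.0f, increment = 0.0f;

    float render()
    {
        phase += increment;
        if (phase > 1.0f) phase -= 1.0f;
        return active ? 2.0f * phase - 1.0f : 0.0f;   // naive sawtooth
    }
};

template <int numVoices>            // fixed at compile time
struct Synth
{
    std::array<Voice, numVoices> voices;

    float renderNextFrame()
    {
        float mix = 0.0f;

        for (auto& v : voices)      // known trip count: unrollable
            mix += v.render();

        return mix;
    }
};

Synth<4> smallContextSynth;         // only the voices this context needs
```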

I did mention this in passing, and it’s one of the most interesting aspects of our design. We designed things so that a SOUL graph can be automatically optimised for the number of cores available at run-time, without the programmer needing to take this into account in their code. Looking forward to announcing more about how this works in the future.
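
As a generic illustration of the principle (this shows nothing about SOUL’s actual scheduler): when a graph declares two branches to be independent, a runtime can render them on separate cores and merge the results, without the processor author writing any threading code.

```cpp
#include <future>
#include <vector>

// Stand-ins for two independent branches of a graph.
std::vector<float> renderBranchA (int n) { return std::vector<float> (n, 0.25f); }
std::vector<float> renderBranchB (int n) { return std::vector<float> (n, 0.50f); }

std::vector<float> renderGraph (int numFrames)
{
    // The runtime, not the DSP author, decides to parallelise here.
    auto a = std::async (std::launch::async, renderBranchA, numFrames);
    auto b = std::async (std::launch::async, renderBranchB, numFrames);

    auto bufferA = a.get();
    auto bufferB = b.get();

    std::vector<float> mix (numFrames);

    for (int i = 0; i < numFrames; ++i)   // the graph's merge node
        mix[i] = bufferA[i] + bufferB[i];

    return mix;
}
```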

No, sorry - I hadn’t even looked at ShaderToy until recently. The idea came from GLSL and my work on our ROLI LittleFoot language.

We actually never considered GPUs as a potential target platform for SOUL, as they aren’t really a very good architecture for most audio tasks. However, since the announcement a few people have told us that they do have use-cases where GPUs could make sense. That surprised us, but it’s interesting, and we may investigate whether an OpenCL backend would work in some cases.

It comes across as just a tiny bit arrogant to make claims about what’s “not implemented yet” when you have no insight whatsoever into what our status is…

We DO have a very clear plan for sampled audio; we just haven’t published the API yet.

JUCE actually includes a large amount of code under the ISC license as well as GPL3. And the way we license JUCE is irrelevant to what we do with SOUL.

I guess when I said “liberal” I really meant “permissive”… We haven’t picked the exact license yet, but we need everyone to be able to use it without restriction, so we’ll choose whatever license suits that objective.

10 Likes

I think lots of what @Gogins said is constrained by wanting to make SOUL a replacement for Csound.
For me, Csound seems like something that could use SOUL, but SOUL seems to be far more general, generic, and orthogonal to what was said in the post.

4 Likes

Thanks for your prompt response! Most of your answers are as I had hoped they would be, and it all sounds good.

I have to speculate because your talk did not cover all details and you have no published spec or API. Makes no difference to me if you think I’m arrogant or not. Don’t judge the other Csound developers by me.

I do have one request and a few additional questions.

Please provide an “open source” license for SOUL as opposed to a “free software” license. I personally, and most of the Csound community, would be fine with a dual license, as most of us do not sell software; our stuff is pretty much all open source. Rory Walsh’s “Cabbage” is, I think, GPL v3 because Cabbage uses JUCE. I don’t prefer that, but it would still work for me.

My main question is, would it be possible to create a SOUL graph in a plugin, for example a Csound opcode plugin? Csound plugins are written in C or C++, but there are Csound opcode plugins that can use Python or LuaJIT to process events and audio. You obviously anticipate that SOUL will become something of a standard, and that different SOUL modules can work with each other; that would greatly extend the efficiency and scope of Csound.

Will the language or API provide collections such as FIFOs, vectors, maps, sets, and so on?

Will the language or API provide linear algebra facilities along the lines of Eigen or boost::numeric::ublas? They provide not just matrix arithmetic, but matrix decompositions up to eigenvalues and eigenvectors.

Thanks,

Mike

2 Likes

Since this forum thread was linked in a Csound forums thread, and I have a particular interest in patenting, I will speak for myself: my first impression of SOUL was “this is too good not to be proprietary”, but then I noticed GPL-3 and all my prejudice went away. So I thank you for choosing a free, open licence, benefiting all users and the audio hacker community at large :+1:

1 Like

I don’t think they’ve yet decided what the license for SOUL will be…

Given the general approach Jules has taken to licensing in the past, I’m not at all worried about this.

2 Likes

Thanks!

I thought I was pretty clear about the fact that we’ll choose a license that makes it as easy as possible for the widest possible number of people to use it without any impediments. The SOUL API is something we want anyone to be able to pull into any project without worrying about the implications of the license.

No, you’re misunderstanding it if you imagine that you’d use it in the same way you’d write C++. SOUL isn’t a general-purpose programming language: there’s no allocation, no pointers, it’s not object-oriented, and no, I doubt we’d ever want high-level data structures like maps or sets, or even vectors. FIFOs… perhaps, although they’d be baked into the syntax as part of the graph structure - that’s something we’re still considering for further down the road.

I’d be lying if I claimed to have much experience of linear algebra, but I doubt whether it’s something that’d belong in SOUL. More likely you’d use your host language to do these calculations and generate a SOUL kernel to run based on the results.

That being said, there’s no reason you couldn’t implement pretty much any maths operation in SOUL itself, so maybe others will find a use-case for this kind of library and it’ll be added. But it’s not something we’ll be worrying about in the short term.

Simpler stuff like matrix operations: yeah, that’s the kind of thing we’d put in the core library (or make into intrinsics), as it’s something that could be accelerated on DSPs where available.
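
For example, a fixed-size matrix multiply (sketched here in plain C++, not SOUL) is exactly the kind of loop nest that maps well onto DSP intrinsics or SIMD once the dimensions are compile-time constants:

```cpp
#include <array>

// With rows/inner/cols known at compile time, these loops are trivially
// unrollable and SIMD-friendly - the kind of routine a core library
// could accelerate per-platform.
template <int rows, int inner, int cols>
std::array<float, rows * cols>
multiply (const std::array<float, rows * inner>& a,
          const std::array<float, inner * cols>& b)
{
    std::array<float, rows * cols> result {};

    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            for (int i = 0; i < inner; ++i)
                result[r * cols + c] += a[r * inner + i] * b[i * cols + c];

    return result;
}
```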

6 Likes

Having watched @Matthieu_Brucher’s talk, I wonder if you guys will team up? I find Matthieu’s JIT very exciting: a potential front end for generating SOUL IR.

1 Like

We’re talking to lots of people about all sorts of aspects of this project. It has to be collaborative to work, and we’re taking advantage of lots of other previous efforts in this area to learn from their experience and avoid possible pitfalls.