As a SuperCollider user, SOUL -> UGen compilation would be a dream come true. Making that happen is well beyond my skill set, so I hope it’s not bad form to make the suggestion, but I’m just throwing it out there.
I’m not really much of a SuperCollider expert, but I assume it’d be possible. It’s unlikely to be something we tackle ourselves, though; it’s more the kind of thing that someone in the SuperCollider community might do in the future once we’ve opened up the platform a bit more.
TBH, more interesting to us would be the possibility of SuperCollider running on top of SOUL as its execution platform, but I don’t know enough about the SuperCollider architecture to know whether that’s possible or not.
SOUL graphs should be relatively easy to compile to a UGen / run as a UGen - the main requirement is that SOUL memory allocation/deallocation would need to be modular enough to connect to SC’s realtime allocator. Beyond that, simple SOUL-based UGens could end up being rather inefficient depending on how they access things like input parameters (e.g. if SOUL-generated code expects input data as a C struct in a specific layout, this would likely need to be copied for every execution, which is obviously not ideal). But as long as a SOUL patch could be configured to read input parameters from arbitrary-ish locations in memory, this would be no problem.
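To make the allocator point concrete, here’s a minimal sketch (all names hypothetical - this is neither SOUL’s nor SC’s actual API) of what “modular enough” could look like: the compiled graph accepts host-supplied alloc/free callbacks, so scsynth could route them through its realtime allocator rather than the system heap.

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Hypothetical hooks a SOUL-compiled graph could accept, letting the host
// (here, scsynth) route allocations through its realtime allocator.
struct HostAllocator {
    void* context;                                // e.g. SC's World*
    void* (*alloc)(void* context, size_t bytes);  // e.g. a wrapper around RTAlloc
    void  (*free)(void* context, void* ptr);      // e.g. a wrapper around RTFree
};

// Stand-in for the fixed state of a compiled SOUL graph.
struct GraphState {
    float history[4] = {};  // e.g. filter state
};

// Instantiation draws only from host-owned memory...
GraphState* createGraph(const HostAllocator& a) {
    void* mem = a.alloc(a.context, sizeof(GraphState));
    return new (mem) GraphState{};
}

// ...and teardown returns it to the host, never touching the system heap.
void destroyGraph(const HostAllocator& a, GraphState* g) {
    g->~GraphState();
    a.free(a.context, g);
}
```

For illustration the callbacks could just be backed by malloc/free; inside scsynth the `context` pointer would be the server’s world object and the callbacks would wrap its realtime allocator.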
SOUL would be especially interesting in an SC context because, in general, SC unit generators have to be hand-coded for different combinations of time-varying vs static input parameters. This is extremely important (a fixed filter freq vs a control-rate-varying freq vs an audio-rate-varying freq can mean an order-of-magnitude difference in compute cost), but it’s a huge cost in terms of development and maintenance. If SOUL can handle generating different machine code for different combinations of (non-)time-varying parameters, that would be a huge win. Halide has more or less figured this out by allowing you to separately describe the graph of operations AND the execution strategy (allowing compiled versions of a kernel that are specifically optimized for particular pixel types, compute architectures, or boundary conditions). Audio DSP programming would really benefit from something with a similar definition-vs-execution distinction.
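As a toy illustration of that split (plain C++17, not actual SOUL output): one kernel written once, with the parameter-rate decision made at compile time, so the static-frequency instantiation hoists the coefficient computation out of the sample loop - roughly what SC plugin authors currently do by hand with separate per-rate calc functions.

```cpp
#include <array>
#include <cmath>
#include <cstddef>

// Sketch: one kernel definition, specialized per parameter rate.
enum class Rate { Static, AudioRate };

// One-pole lowpass. When the frequency input is known to be static, the
// (expensive) coefficient update happens once; otherwise it runs per sample.
template <Rate FreqRate, size_t N>
void onePole(std::array<float, N>& out, const std::array<float, N>& in,
             const float* freq, float sampleRate, float& state) {
    float coeff = 0.0f;
    if constexpr (FreqRate == Rate::Static)
        coeff = std::exp(-2.0f * 3.14159265f * freq[0] / sampleRate);
    for (size_t i = 0; i < N; ++i) {
        if constexpr (FreqRate == Rate::AudioRate)  // recompute every sample
            coeff = std::exp(-2.0f * 3.14159265f * freq[i] / sampleRate);
        state = in[i] + coeff * (state - in[i]);
        out[i] = state;
    }
}
```

Instantiating `onePole<Rate::Static, N>` and `onePole<Rate::AudioRate, N>` yields two separately optimized pieces of machine code from a single definition - the kind of thing a compiler could in principle do automatically for each parameter-rate combination.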
Implementing SuperCollider’s full engine using SOUL is probably possible (there are already at least two separate scsynth implementations as it is), but I doubt it would be worth it. SC already does a healthy amount of audio-graph optimization and parallelization - unless SOUL does this VERY well out of the gate, there’s not a lot of improvement to be made. Actual execution of the graph is not really an interesting problem (iterate through a list in the right order, and call a function for each node) - and, without access to the library of thousands of unit generators SC has now, I doubt a new server implementation would take off, even if it provided some minor optimization or a more manageable codebase.
I can see benefits from translating SC into SOUL to take advantage of the SOUL runtime (and hence support SC on other devices). The other way around seems less useful, but I’m not going to rule anything out, as I’m not well versed in SC or the development difficulties of getting SC units written.
The fact that there is mention of memory allocation wrt SOUL suggests the models are rather different, since SOUL does not support any dynamic memory allocation - that might make supporting SOUL much easier, or much harder, depending on what the SC runtime looks like!
If SOUL can target things like SHARC chips, this could be quite interesting - you’d get a high-level control language like SC to specify signal chains and do realtime control, with SOUL handling the lower level DSP.
At the very least, a SOUL signal graph must need some fixed amount of memory to be instantiated and operate - for it to work in an SC context, this memory would need to be provided by SC’s realtime allocator (as opposed to e.g. system facilities like malloc), or else it probably wouldn’t be properly realtime-safe.
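One plausible interface shape for that (hypothetical names, just a sketch): the compiled graph reports its fixed state size up front, and the host hands it a buffer allocated however the host likes - e.g. from SC’s realtime allocator - so the graph itself never calls malloc.

```cpp
#include <cstddef>
#include <new>

// Stand-in for a compiled SOUL graph's state: here a trivial ramp oscillator.
struct SineState { float phase; float increment; };

// The graph's fixed memory requirement is known statically...
constexpr size_t graphStateSize() { return sizeof(SineState); }

// ...so the host allocates, and the graph merely constructs in place.
SineState* initGraph(void* hostMemory, float freq, float sampleRate) {
    return new (hostMemory) SineState{0.0f, freq / sampleRate};
}

void renderBlock(SineState* s, float* out, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        out[i] = s->phase;  // naive phase ramp in [0, 1), for illustration only
        s->phase += s->increment;
        if (s->phase >= 1.0f) s->phase -= 1.0f;
    }
}
```

With this shape, an SC wrapper UGen would call `graphStateSize()` once at construction, grab that many bytes from the realtime allocator, and pass the buffer to `initGraph` - fully realtime-safe as long as the size really is fixed.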
I think SOUL and SC’s models are similar, but SC is just solving a broader class of problems. SuperCollider has a concept of fixed signal graphs (iirc these actually support realtime-safe dynamic allocation in limited ways, but in practice are usually fixed), but also has a larger architecture in which these fixed graphs (“Synths”) can be scheduled, added, modified, and removed from a larger processing graph in real time.
I can’t imagine there’s an immediate need for SOUL to support dynamic allocation. At some point, people will be building things complex enough that they’ll be writing their own dynamic allocators in SOUL (after all, the standard voice management that a poly synth does is nothing if not a super-basic allocator…) - at that point, there might be utility in providing something common and realtime-safe rather than letting SDK clients shoot themselves in the foot - who knows.