SOUL Lang


#81

Have you already evaluated integrating SOUL with the AAX SDK? Offloading multi-plug-in graphs will obviously need support from Avid, but doing a single SOUL graph per plug-in may be doable with the current state of AAX, and is a very interesting use case that could help drive the adoption of SOUL.
Somewhat related: for cases where e.g. code signing or some form of validation process is required by some target platform, will SOUL optionally support compiling a given graph to some pre-configured backends on a build server (instead of JIT on the customer’s machine)?


#82

We’ll certainly talk to Avid, but haven’t really thought too deeply about AAX yet. It’d be more of a legal discussion than a technical one.

I think if you really need to run SOUL on a signed platform then you’d probably just end up using the SOUL->C++ generator and doing a normal build once you’ve got the code right. Those platforms don’t generally permit jitted code to be used anyway.


#83

It’s good to see that SOUL->C++ will be available - this should allow for building at least a limited AAX wrapper within the current legal bounds of AAX, so that we don’t have to write the processing code twice for SOUL and AAX. I think this may be crucial to some of us.


#84

Just an FYI to people who are interested in all this… We quietly snuck out a GitHub repo where we’ll be publishing the code. Not much to see there yet, but you may want to watch it for changes :slight_smile:


#85

After reading the draft I am a bit underwhelmed to be honest.

At its core, the system seems to enforce a sample-by-sample approach to writing DSP code, which makes it awkward to use for tasks that are inherently block-based, like convolution or some multi-rate algorithms, to name just a few. It could also make it awkward or impossible for the compiler to use SIMD optimization.
I could possibly roll my own block-based processor, stuff audio into it and call advance() in a loop, but that would clearly be a hack. How about letting processors decide how many samples they process per call to advance()? Or having advance(nSamples) take an optional argument?
Perhaps someone will chime in and say, yes, we’ll get convolution covered as some extra module, but this won’t help if I need something like a time-variant convolution and a cosine transform every now and then.
The one-sample-at-a-time paradigm also gives beginners the wrong idea about how DSP is done, and often stands in the way of their discovering or exploring things that are cool and ridiculously fast but violate it.


#86

Sorry - that doc doesn’t have any info about how our block-based windowing will work, so please don’t jump to the conclusion that we haven’t thought about it!

There’ll be a type of stream which is window-based rather than sample-based. Essentially it’ll be like having a low sample-rate stream containing extremely big “samples”, each of which is a vector of pre-windowed data. This will make it really easy to write frequency-domain or specifically block-based tasks.

And please try to understand that a central motivation in the design philosophy for SOUL is that we want to stop users from manually writing code in ways that make assumptions about what might improve performance.

We plan to let this code run on a wide range of processors (including exotic many-core DSPs, GPU/compute engines, and CPUs without SIMD, etc). It wouldn’t make sense for users to write their own FFT or FIFO, or to make assumptions that a particular buffer size or alignment would necessarily be faster than doing a task in a simpler way, because these are all things which only the runtime can know how best to implement.

Important things to note:

a) The vector type is a SIMD type - so processing done on e.g. multi-channel float samples already translates directly to SIMD instructions if they’re available (there’s a sketch of this after these notes).

b) When we perform code generation, we specialise the code with a fixed buffer size which is known at compile time. That gives the LLVM polyhedral optimiser a chance to unroll the loops and vectorise it, and in our tests we’ve been impressed by how tight the code is that it emits. It’s definitely better than I could write by hand.

c) A piece of simple SOUL is unlikely to beat the performance of a piece of hand-coded assembly that a performance guru has spent months perfecting for a particular CPU. That’s not our goal. What we want to do is to make it possible for the other 99% of coders to easily write code that’s (at least) as good as some competently-written C++ would be.
And then we add value by making it magically portable to new hardware platforms which can offer low-latency, low-power and/or higher performance.
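
To make point (a) concrete, here’s a rough sketch of the kind of thing I mean - a stereo gain written with a float<2> vector stream, so the per-frame maths is already in a SIMD-friendly form (untested, and the example is mine rather than anything from the guide):

processor StereoGain
{
    input  stream float<2> audioIn;
    output stream float<2> audioOut;

    void run()
    {
        loop
        {
            // An element-wise multiply on a vector type, which the backend
            // can map onto SIMD instructions where they're available
            audioOut << audioIn * 0.5f;
            advance();
        }
    }
}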

I couldn’t disagree more!

One-sample-at-a-time is exactly the right way to think about DSP coding.

Using buffers is a historical blip! Buffers only exist because so far we’ve all had to work as close to the metal as possible, and “the metal” up to now has involved these large memory buffers that get copied to and from a CPU, and passed in and out of black-box DLL plugins.

But if you think about how this stuff will work in the future, it surely involves your audio code running on dedicated audio processors which might literally process one sample at a time (perhaps 4 or 8 samples at a time is more realistic, but essentially they’ll have a small fixed block size/latency). They’ll have hardware to accelerate stuff that people currently waste time hand-coding, and will probably use weird power-efficient core layouts with bus connectivity that you don’t want to have to understand.

Think about how things worked before OpenGL: when 3D graphics ran on CPUs, John Carmack and his mates invented some incredible tricks to squeeze the maximum performance out of their Doom engine. Really bonkers stuff involving cache lines and all kinds of buffer tricks. But those tricks quickly became irrelevant once GPUs came along, and now people write all their 3D code as pixel-by-pixel algorithms and the hardware figures out how to parallelise it.


#87

Jules,

Good to learn that some block based processing will be supported!

I also like your analogy about graphics and one-sample-at-a-time processing, but I think the comparison is a bit flawed. Samples are not pixels, and the moderate use of buffers is often part of the algorithm. Take blips as an example: placing one blip into a buffer to represent an impulse at a fractional time some microseconds in the near future is easy; doing the same without a buffer can get complicated and will ruin your code. I have seen this firsthand with programmers who were hardwired to believe that samples just have to be calculated in succession.

Anyway, nice to see that my concerns have mostly evaporated, keep up the good work!

Stefan


#88

I’m struggling a bit to imagine what you mean about calculating a blip… maybe you could give a code example?

There are definitely cases where an algorithm can be made faster if you can do a whole buffer at once.

But it can never be simpler because your process function must be able to deal with the case where it is given a block size of 1!

So even if you have an algorithm which has a nice simple implementation for a large buffer, you’d still have to write a special case to deal with buffer size = 1, so you end up with even more code, doing the same job in different code paths!

But in SOUL there’s nothing stopping you allocating an array and pre-calculating a block of data which you subsequently play back when each sample is actually needed. That’s totally valid, and maybe that’s the kind of thing you had in mind?
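
For example, something roughly along these lines should be possible (just a sketch to illustrate the idea - the block size and the decaying-click contents are arbitrary, and I haven’t compiled it):

processor BlipPlayer
{
    output stream float out;

    void run()
    {
        float[64] block;

        // Pre-calculate a whole block of data in one go...
        var amp = 1.0f;
        wrap<64> writeIndex;

        loop (64)
        {
            block[writeIndex] = amp;
            amp *= 0.9f;
            ++writeIndex;
        }

        // ...then play it back one sample per call to advance()
        wrap<64> readIndex;

        loop
        {
            out << block[readIndex];
            ++readIndex;
            advance();
        }
    }
}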


#89

I hope there will be a solution for using dynamically changing audio files. (For example, for a sampler which allows users to use their own samples…) So far it has been mentioned that using audio files will be done via “resources”, which smells awfully like something where the sample data can only be initialized once, when the SOUL code is being compiled…


#90

Yeah, it’ll be able to do that - to send file data you’ll call an API function and either give it a file to stream from, or your own custom data-provider callback. Part of that will be a function you can call to say “flush any cached data associated with this resource”.


#91

Jules,

I am at peace with SOUL now - my last comment was only about the negative impact the one-sample-at-a-time architecture can have on the mind of programmers, especially those new to DSP. I chose the blip example because it represents an impulse at a (fractional) time which usually cannot be represented by a single sample. The easy way to handle this is to throw a bunch of samples into a buffer and forget about it - but I have experienced more than one programmer who made things extremely complicated just to avoid the simple buffer solution, because they had internalized the one-sample-at-a-time paradigm.

Stefan


#92
processor EventToStream
{
    input event float eventIn;
    output stream float out1;

    event eventIn (float f)
    {
        // Output 2* and 5* the received event value
        eventOut << f * 2.0f;
        eventOut << f * 5.0f;
    }  
}

I guess it should be:

processor EventToStream
{
    input event float eventIn;
    output stream float eventOut;

    event eventIn (float f)
    {
        // Output 2* and 5* the received event value
        eventOut << f * 2.0f;
        eventOut << f * 5.0f;
    }  
}

#93

@jules after reading the language guide I’m a little confused on the Graph structure. Are process graph nodes/connections fixed at compile time? Or is there some way to insert new nodes, remove others, and move connections around in the Graph at run time?

On the stream operators, do they function like C++ stream operators, as in does myOutput << sin(phase) << cos(phase) have any meaning?

I think it would be great to disallow uninitialized variables (even implicitly zero-initialized ones). That’s a bug 100% of the time, even if the intended value happens to be 0 - which it probably will be for a lot of DSP code.

With variables

let [name] = [initial value]; - infers the type from the initialiser, and the resulting variable is const
var [name] = [initial value]; - infers the type from the initialiser, and the resulting variable is non-const
[typename] [name]; - creates a mutable variable of the given type, which will be implicitly zero-initialised
const [typename] [name] = [initial value]; - creates a const variable of the given type
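
As I read it, those four forms side by side would look something like this (my own sketch, not taken from the guide):

void example()
{
    let cutoff = 1000.0f;        // type inferred from the initialiser, const
    var phase = 0.0f;            // type inferred from the initialiser, mutable
    float64 accumulator;         // explicit type, mutable, implicitly zero-initialised
    const int numSteps = 16;     // explicit type, const
}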

Why is the declaration syntax not consistent? (Having multiple ways to declare the same thing is a pain in my butt for maintenance.) e.g.:

var [identifier] -> non-const variable with inferred type
const var [identifier] -> const variable with inferred type 
typename [identifier] -> non-const variable with explicit type 
const typename [identifier] -> const variable with explicit type

This looks really cool, I’m excited to get to play with some examples when you guys are ready to drop an interpreter.


#94

Yes, it’s fixed at compile time - you can’t change it while it’s running. In a future version we’ll probably allow you to create parts of a larger graph which can be swapped out in realtime (for optimising big DAW apps etc), but you should think of the graph as a program, not as something that changes dynamically.
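
In other words the graph is declared statically, something like this sketch (assuming a Gain processor defined elsewhere - this isn’t code from the guide, just to show the shape of it):

graph TwoGains
{
    input  stream float audioIn;
    output stream float audioOut;

    // Nodes are declared up-front...
    let gain1 = Gain;
    let gain2 = Gain;

    // ...and the connections between them are part of the program
    connection
    {
        audioIn -> gain1.audioIn;
        gain1.audioOut -> gain2.audioIn;
        gain2.audioOut -> audioOut;
    }
}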

Not really, as the second value would just overwrite the first, unless you call advance() in between the two writes. So no, the operator doesn’t support chaining like that.
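
So rather than chaining, you’d write each value into its own frame, something like this (a sketch using the names from your question, untested):

processor TwoPhases
{
    output stream float myOutput;

    void run()
    {
        float phase;

        loop
        {
            myOutput << sin (phase);   // goes into the current frame
            advance();
            myOutput << cos (phase);   // goes into the next frame
            advance();
            phase += 0.01f;
        }
    }
}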

I guess we could force people to write = 0 on the end of variable declarations, but I’m not 100% convinced it’d be helpful… Interesting idea though, we’ll consider it.

Don’t really understand your problem with the declarations - we spent a lot of time looking at how other languages do things, trying them out, asking opinions etc, and I think let reads really nicely and is very familiar to people who use other languages. It’s also probably going to be the most common way you’ll write a declaration, so being short is important.


#95

Don’t really understand your problem with the declarations

It’s not that I have an issue with the declaration syntax, or the keyword let. It’s that I feel like it’s inconsistent with the other variable declarations.

For instance in ML languages where you have something like

let [mut/const] [identifier] : [type annotation] = [expression];

With the identifier, the let keyword, and the initialization expression being required, and the [mut/const] or [type] annotations being optional (with one of mut/const as the default) or inferred.

I get that I’m bikeshedding a bit here, but I just feel that having let imply const-ness is a little confusing. My overarching concern is that it’s confusing whether mutability is implicit or not, especially when you have multiple ways of declaring something to be mutable or const. In terms of maintainability, I’m worried about the C++ situation where there are like a dozen different ways to initialize the same variable, which is why I enjoy the consistent, terse syntax from the ML family.


#96

@jules I’ve just started reading the language guide, and whilst reading about arrays and array slicing I wondered if you had considered supporting negative values for an index, for example…

x[-1] // returns the last element of array x
x[-2] // returns one element from last of array x

I’ve certainly found this handy in Python at times. However it’s a small thing, and you may have already considered it and dropped it for reasons I haven’t yet considered.


#97

That just seems like it would introduce bad habits for people who use SOUL as their first language, and then move to Java/C++ and think that negative index values are a thing. A huge part of programming is muscle memory of typing certain patterns. It’s like when you only program C++ for a long time, and then work with a language that doesn’t require semicolons at the end of each statement but you add them anyway.


#98

I think I would love to write DSP code in terms of one sample at a time.
I think I would like to be able to process incoming MIDI events on an event-by-event basis too.
It’s a weird thing about MIDI processing inside JUCE that it’s different from how a normal GUI operates, which usually has a callback per event.
I think it would be lovely to forget about buffers and just process MIDI events as they occur, and leave the buffering etc. to the smarter, more experienced people building the system above me.
Perhaps it’s my inexperience that makes me think this way, but I love that idea. I think it could be way simpler than it is.


#99

I don’t find that to be a particularly convincing argument for not implementing something. If every language followed that rule we would never progress. Not to mention that it wouldn’t be the first language to have it - Python has it, as already mentioned, and I believe Ruby does too; I suspect there are others.

If it doesn’t make it in, fair enough - I just think myArray[-1] is a lot cleaner than myArray[myArray.size - 1].


#100

Ah, the bikeshedding begins… :slight_smile:

Honestly, ‘let’ was one of the easier syntax choices that we had to make - it’s pretty much the standard way of declaring constant, type-inferred variables in the majority of modern, popular languages. We have a target audience in mind of people who use JavaScript, Java, C#, etc., so sorry to any ML fans!

We actually get that behaviour implicitly, thanks to the fact that the index gets modulo’ed with the array size (and this will be free for compile-time constants since the array size is also known at compile time)