Is the process() function working like a coroutine? Is advance() like a yield which gives the control back to the code that outputs the audio to the audio driver?
Thanks for your response Jules - braces or not, I'm looking forward to giving it a try!
That's the general idea. Behind the scenes it's a bit more complicated, but essentially you just write the processors like threads and let the API worry about how to run them. They may be coroutines or threads, or something more exotic, but this is hidden from the programmer and doesn't affect how you write the code.
Functional programming folks will surely be happy to have their code running on the IR layer. And maybe we can even imagine mixing functional and imperative programming in new ways, which is quite challenging.
You said bindings and that made me think.
So in the worst-case scenario, you have, say, an Android device and use this language with nothing running on the metal. Correct me if I am wrong, but as I understand it, you would have Java bindings that I would call (via JNI) to "load" this script into some kind of SOUL runtime that is JIT-compiling?
I'm trying to see how this fits into write/test and where the boundaries are. It's also possible what I just asked makes no sense at all - I am trying to find the correct words.
And for DSP gurus like Julius Smith, thinking "functional" seems to be the way to go - see this video, starting at 14:00:
There's nothing wrong with functional, as long as everything can be directly expressed at a high level with the language and its hopefully very comprehensive standard library. Things get complicated and frustrating when anything involving low-level operations needs to be written from scratch.
There's also a mismatch between audio processing and functional programming with regard to referential transparency. In audio you need algorithms where putting the same number into a function multiple times does not produce the same output each time, because the result depends on the past signal (read: filters, delays, reverbs…). Random numbers are also important, and not really in the "spirit" of functional programming.
Audio DSP in itself is obviously functional in nature. But in reality the dirty details always get in the way of pure functional idealism. Just give me my loops and mutable state.
I've been wanting to write code like that in C++, but the coroutine support is just too weak and experimental for that at the moment. (And threads would be just too messy to deal with for that…) So I am very interested to see how things will work with SOUL.
Is SOUL development going to be fully open source with an RFC process and all that?
OpenAL is basically the OpenGL for audio… it's old and outdated.
It used to be used for games and was a fair alternative to FMOD at the time. Now it's just a pile of rubbish.
About a decade ago, I used PixelBender in a Flash plugin to do synthesis for a 6-voice synth emulating a Kurzweil K2600. PixelBender was Adobe's scriptable GPU language for doing graphics processing on the GPU directly inside Flash plugins without writing any OpenGL or shaders. https://forums.adobe.com/thread/16332 I had to code the PixelBender shader by hand in "pixelbender assembly", but all of the audio was generated on the GPU instead of the CPU, which was rad. Latency times were surprisingly low for a Flash plugin too.
OpenAL and SOUL actually pursue different goals:
OpenAL was used for spatialisation, i.e. adding geometric information to a rendering pipeline. OpenAL is rather similar to OpenSceneGraph (funnily, my first audio program ever, back in the early 2000s, was adding OpenAL audio sources into OpenSceneGraph).
SOUL, on the other hand, is about where DSP code is executed. Just as OpenGL uses the hardware, SOUL will abstract away from the hardware, allowing the audio generation to be shifted further down the pipeline.
But SOUL is not specific to spatialisation, the way OpenAL is.
Please make it statically typed, Jules. Don't listen to the Python devils - they just want your SOUL!
I wonder about concepts like processor and stream on your slides. If this isn't a general-purpose language, how much of a fixed pipeline is there?
To make an analogy to OpenGL and GLSL:
varying vec4 vColor;

void main (void)
{
    gl_FragColor = vColor;
}
The fragment interpolation of the varying is basically implemented on hardware. Same for
out = texture2D(textureSampler, uvCoordinates);
Will there also be such basic concepts in SOUL? Some sort of fixed pipeline, similar to a rasterizer on a GPU? And vendors have to support and implement them?
oscillator Beep
{
    output out: stream float;
    varying phase: float;

    void process(void)
    {
        out << 0.1f * sin (phase);
    }
}
With the potential to do something like this:
interpolator myBuffer : float;

void process(void)
{
    out << 0.1f * sample(myBuffer, phase);
}
What is sample() doing? It depends on your previously set interpolation quality, and reads your buffer with that interpolation:
interpolator myBuffer : float;

soul::setInterpolatorQuality(myBuffer, LinearInterpolation);
soul::setInterpolatorQuality(myBuffer, SincInterpolation);
If we're talking about simplifying development and not reinventing the wheel: why do we always have to rewrite interpolation, phase wrapping, oscillators, samplers, delay buffers, convolution and especially FFTs from scratch? Okay, maybe it's a bit too specific. But just imagine: FFT, oversampling and interpolation implemented in hardware. This would be awesome.
Now. SOUL is definitely exciting.
Sorry to be the sceptic here, but this sounds like the end of IP and code protection?
Surely a well-defined IR/VM will have reverse compilation, so this will lead to people ripping those important DSP bits…
Already did. Never considered any other possibility!
I didn't have time to dig into the tech details in my talk, but interpolation and up/down sampling are built into the fabric of the graph structure at a deeper level than you're suggesting here. It's absolutely part of the goal that if you're running this code on a chip with acceleration for things like interpolation, FFTs, fast oscillator generation etc., it will magically take advantage of those things.
You can reverse engineer anything if you really want to. If you can run something, you can reverse engineer it.
SOUL's IR would be harder to steal than GLSL source has been for the last 10 years, and is equivalent to Vulkan/Metal in terms of having a compiled instruction set. It doesn't seem to have been a particular problem for any of those technologies.
Besides, remember that the DSP code is only a bit of your app or plugin. On its own it's kind of useless without all the other glue code to make it work, and a great UX for people to choose to use it. I don't think there's a problem with people stealing algorithms and using them in rival products - anyone who's smart and serious enough to actually build a whole product and attempt to compete with others probably doesn't need to copy anything, and wouldn't take the risk of being caught.
And the other type of copying - pirates who just crack the whole thing - is almost impossible to avoid anyway, even with 100% natively baked, obfuscated programs.
It's also worth noting that the DSP is already separated into standalone DLLs for things like AAX DSP, WPAPI, and UA, so it certainly isn't any less secure than that. Adding any serious code protection to DSP is generally not a great idea anyway, as it often has a negative effect on the efficiency of the algorithm. Not to mention that in many cases you can figure out exactly what the DSP is doing with a bit of careful analysis.
@jules it seems like the SOUL design philosophy is a natural for marriage with low-latency technologies like Dante. Is that sort of thing on the radar?