SOUL first beta release!

Hello! Just a note to say that we’ve quietly slipped out our first v0.8 beta release of SOUL today!

All the info you need should now be in the repo:

It includes the following items:

  • Support and specs for the SOUL Patch format. This is a new audio plugin format for writing SOUL-based cross-platform plugins.
  • Dependency-free C++ headers and dynamic libraries to allow developers to load and run SOUL Patches in their apps on OSX, Windows and Linux.
  • Utilities for loading SOUL Patches as a juce::AudioPluginInstance, to make it easy for existing JUCE apps to load patches.
  • A command-line tool that performs a range of tasks, including:
    • Load and run a SOUL patch with live code reloading
    • Compile a SOUL file and emit errors, for integration into IDEs
    • Generate C++ or HEART from a SOUL file
    • Create new patch projects
    • Generate visual graphs from SOUL files
  • A refreshed soul.dev playground which supports patches, and has new input and output features

Also coming very soon is a release of Tracktion Waveform which has built-in support for SOUL patches, allowing them to be live-coded while the DAW is running… Hopefully available in the next day or two.

Looking forward to our SOUL workshop at ADC on Monday, and then the many more years of slog to keep building this tech into something even more amazing!

29 Likes

Some feature requests for the SoulPatchDemoHost (I know, I know, it’s a beta and just a demo, but still…):

  • Ability to change the audio hardware and its settings. (Is MIDI hardware input supported?)
  • A CPU meter; I’m sure many people would be quite interested in the CPU usage of SOUL patches.

Yeah, I deliberately kept that demo app to an absolute bare-bones minimum; I don’t want to start bloating it with stuff like that. We’ll probably start releasing more complicated host apps at some point. I’d love to build a graph-based host which visualises all the data flow between the nodes and lets you do fancy analysis, but we’ve not had the resources to do that yet.

5 Likes

Well, I can probably add those myself.

Since there’s apparently no example of using multiple .soul files together, I’m trying to cobble together my own by making the sampled piano go into the reverb. I ran into the issue that the piano has a mono output while the reverb expects two input channels. I tried the obvious:

volumeProcessor.audioOut -> reverb.audioIn[0];	
volumeProcessor.audioOut -> reverb.audioIn[1];

But that produced the error “Language feature not yet implemented: Channel indexes!”. Does this mean that, at the moment, a separate SOUL processor needs to be written to deal with I/O channel count mismatches?

Yes, at the moment I think you’ll need to write a processor to alter the shape, something like:

https://soul.dev/lab/?id=1c2a29277dce81b6931b796d97f43b32
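
The gist of that shape-changing processor is a mono-in, stereo-out node that writes the same sample to both channels. A minimal sketch (illustrative only, not the exact code behind that link; it assumes a float<2> (a, b) vector construction):

processor MonoToStereo
{
    input  stream float    audioIn;
    output stream float<2> audioOut;

    void run()
    {
        loop
        {
            // copy the mono sample into both channels of the stereo frame
            audioOut << float<2> (audioIn, audioIn);
            advance();
        }
    }
}

You’d then route the piano through this node and into the reverb in your graph, rather than indexing reverb.audioIn directly.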

We’re adding features as we go, and it’s obviously a bit duff that you need to write these sorts of things, so we’ll be adding some quite sophisticated routing options to the graph connections to allow you to do things like the connection you attempted…

1 Like

OK, thanks. I noticed there was another version of the reverb in the repo with a mono input, but that’s not working either: the sound seems to stop passing at some point in the signal chain. Are processors/graphs written in separate .soul files actually supported at the moment…?

All modern music plugin APIs now support sample-accurate automation of a plugin’s parameters.

I notice that the SOUL RenderContext struct has no support for sample-accurate parameter updates. Are there any plans to support this feature of modern plugin APIs?

Likewise, the RenderContext API is limited to 3-byte MIDI messages, which seems to rule out the use of SysEx messages. Also, with MIDI 2.0 just around the corner, would it make more sense to remove this restriction, i.e. to allow MIDI messages longer than 3 bytes?

Given that SOUL has a ‘clean slate’, so to speak, regarding its API, I don’t see any reason to impose these limitations.

1 Like

I also tried directly including the Reverb processor code in the piano example, but there’s still no sound output. (If I take out the routing of the signal into the reverb and back, and make the Piano graph have a mono output, I do get the piano sounds.) Maybe I’m doing something wrong, or maybe there’s a bug in some part of SOUL or the demo application.

SOUL itself is sample-accurate and deterministic. That’s been there from day one.

The render context stuff you’ve pointed at maybe isn’t making it clear how you’d use this. Say you want to update a parameter in 30 samples’ time: you’d call render with 30 frames of input and output to move time forward 30 samples, then update your parameter with its new value (by calling setValue on the Parameter), and then render further frames; your parameter update is now in place and sample-accurate. It’s a synchronous interface, so the caller controls how time moves forward, and the event can therefore be applied at a given sample time.

Behind the scenes we model parameters as either events (the SOUL processor will receive an event at the exact sample when the parameter setValue was called) or as a stream, in which case you see a step change in the value at that sample, or, if you’ve set up the stream with a ramp, a smooth change of value starting at the sample when setValue was called and ramping towards your target value at a specified slew rate.
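
To make the event side of that concrete, here’s a minimal sketch of what an event-style parameter looks like inside a SOUL processor (the names and annotation keys are illustrative, not taken from the patch spec):

processor Gain
{
    input  stream float audioIn;
    output stream float audioOut;

    // event-style parameter: the handler runs at the exact frame
    // corresponding to the host's setValue call
    input event float gainDb [[ name: "Gain", min: -48, max: 0, init: -6 ]];

    float gain = 0.5f;

    event gainDb (float newLevel)
    {
        gain = pow (10.0f, newLevel / 20.0f);
    }

    void run()
    {
        loop
        {
            audioOut << audioIn * gain;
            advance();
        }
    }
}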

MIDI messages - yes, restricted at the moment to the simplest messages, but we’ve thought this through and just haven’t got around to implementing longer messages yet. SOUL supports them; they just aren’t currently exposed through the RenderContext interface, so you can write SOUL that takes SysEx, you just can’t pass the messages in through that API. Likewise, we don’t support emitting MIDI in that interface (it’s restricted to channel streams), and also it’s float-only, with no double support, but this is all on the list!

2 Likes

The problem you’ve got in that example is that you’ve taken two graphs which have parameters (and default values) and combined them, but haven’t propagated the parameters to the top-level graph. This leads to the reverb’s settings all being ‘0’, since, as they aren’t mapped anywhere, no values are ever submitted. This means that both the dryLevel and wetLevel of the reverb are set to 0, and hence you don’t hear it…

I’ve updated your example, copied the reverb’s parameters to your new main graph, and wired these parameters through to the reverb, and it’s working for me. Here it is in the playground:

https://soul.dev/lab/?id=d597fb41ed6ac49030b1ad02b177bd35

This problem of having to duplicate events/parameters up the graph is annoying, and something we’ll be sorting out. The difficulty is that sometimes you really don’t want all parameters to make it to the top (and be visible to the user), but quite often, when composing like this, you do. Once we get past writing these graphs by hand and instead have some graphical tooling where you ‘draw’ your graph, we’ll propagate the parameters up by default, and this sort of composition will be trivial.
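
For reference, the “duplicate the parameter at the top level and wire it down” pattern looks roughly like this (a sketch with illustrative processor and parameter names, not the exact playground code):

graph PianoWithReverb
{
    input  event float wetLevel [[ name: "Wet Level", min: 0, max: 1, init: 0.33 ]];
    output stream float<2> audioOut;

    let piano  = Piano;
    let reverb = Reverb;

    connection
    {
        // forward the top-level parameter down to the reverb's own input
        wetLevel -> reverb.wetLevel;

        // assumes the channel shapes already match (e.g. via the mono-input reverb)
        piano.audioOut  -> reverb.audioIn;
        reverb.audioOut -> audioOut;
    }
}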

1 Like

Thanks! Heh, I kind of suspected it might have something to do with the processor parameter (default) values, but didn’t try looking into it further.

Hmm, something that maybe we need to explain better is the difference between SOUL itself and SOUL Patches…

Like Ces explained, the underlying SOUL platform is completely sample-accurate and works with any type of input/output/stream/event channel data.

The patch format just uses a subset of what SOUL can do, in order to create a plugin-style format that’s familiar enough to fit alongside the other existing ones.

And this is only v0.8! We’ll certainly be pimping up the Patch API as we progress; there’s lots of stuff to add. For the initial release, because I added the juce::AudioPluginInstance wrapper class, which doesn’t itself support sample-accurate parameter changes, there was no urgency to add a more explicit mechanism for handling that (since, as Ces said, you can do it perfectly well by slicing the blocks anyway). However, a big project for us in 2020 will be to define a channel data stream multiplexing format, which will allow an arbitrary number of sparse or non-sparse channels to be efficiently and sample-accurately multiplexed; as well as becoming our network transport format, that’ll probably also act as an alternative, more powerful interface to the patch rendering system.

2 Likes

Looks very exciting!

Question/request: Would it be possible to build the SoulPatchHostDemo and link the library statically, instead of manually copying a pre-built DLL?

I did manage to add and build the included modules (which required C++17, which is cool), but it seems like SOULPatchAudioPluginFormat can only be constructed with a path to the DLL.

Well, we’re keeping the stuff in the DLL closed-source for the moment while we figure out our strategy, and it’s a lot simpler to offer a DLL than all the varieties of static lib which people would need. (C++20 modules will be the ideal solution for this, one day!)

2 Likes

Can’t add anything regarding the business side.

I just meant that it would be ideal to have modules to build/link against, so we could build debug versions, step through the code, etc…

But I definitely understand you not open-sourcing everything… I really appreciate you putting it out there as is; it’s already a great resource.

1 Like

@jules would love an askHostToReinitialiseOnMessageThread flag to pass to SOULPatchAudioProcessor and just have the thread call handleAsyncUpdate() directly if it’s false. I can hack it for now but I prefer to stay in sync with the SOUL repo.

Yeah… I tried to keep the whole thing as agnostic as possible about threading; it doesn’t feel like it’s the responsibility of that class to deal with the choice of thread. You’re passing a lambda anyway, so surely it’s almost no code to simply wrap a call to triggerAsyncCallback?

it doesn’t feel like it’s the responsibility of that class to deal with the choice of thread

Yes, I agree… The current implementation of SOULPatchAudioProcessor derives from juce::AsyncUpdater, and SOULPatchAudioProcessor::run() calls triggerAsyncUpdate().

https://github.com/soul-lang/SOUL/blob/master/source/API/soul_patch/helper_classes/soul_patch_AudioProcessor.h#L510


Do I need to link a closed-source DLL/library into my product in order to use all parts of SOUL?
Can you clarify which parts of SOUL require closed-source binaries?

1 Like