I’m wondering about the ability of SOUL to feed data back to the host application for rendering purposes. For example,
- Real-time oscilloscope-like views of waveforms in the GUI. This kind of view requires access to the last N seconds of a (possibly internal) audio stream, which is then processed at graphical frame rates.
- Running min/max of continuous audio/control signals and rendering that information on the GUI.
- Being able to enable/disable this processing based on visibility.
More generally, does the host application (GUI thread) have non-blocking access to the internal streams of SOUL? Can you use control signals to enable/disable processors?
We’ll certainly build tools to allow that kind of inspection, but it’s more complicated than just being able to watch the data from another thread!
When a graph gets compiled, the internal streams no longer exist - they may be optimised out completely or just become implicit in the sequences of calculations that the program performs. To monitor parts of the data flow, a tool would have to insert hidden nodes into the graph before running it, and these would send the intermediate data out into FIFOs that the host can then pick up.
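As a rough sketch, such a hidden node could be an ordinary pass-through processor with an extra output stream for the host to tap (the processor and stream names here are illustrative, not anything SOUL itself provides):

```soul
// Hypothetical probe: passes audio through untouched, while
// duplicating it onto a second stream the host can consume.
processor SignalProbe
{
    input  stream float in;
    output stream float out;   // normal signal path
    output stream float tap;   // copy for the host-side FIFO

    void run()
    {
        loop
        {
            out << in;   // pass the signal through unchanged
            tap << in;   // mirror it to the monitoring stream
            advance();
        }
    }
}
```

The tooling would splice an instance of something like this into each connection being watched, and route the `tap` stream up to a top-level output.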
This hidden node would be a processor in SOUL terms, right? So, hypothetically speaking, if we wanted to have a bunch of places in the graph that we could monitor, then each of those places would have an instance of the aforementioned probe/FIFO processor connected before compilation. Then from the GUI thread we would go around enabling/disabling the probe processors depending on which ones we need to render the visible parts of the GUI. Is this basically correct?
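One way to sketch the enable/disable part is with an event input on the probe, which the host could fire from its GUI logic (again, the names are hypothetical and the event is assumed to be delivered by the host):

```soul
// Hypothetical switchable probe: the host sends a bool event
// to turn the monitoring stream on or off at run-time.
processor SwitchableProbe
{
    input  stream float in;
    output stream float out;
    output stream float tap;
    input  event  bool  enableTap;   // assumed to come from the host/GUI

    bool tapEnabled = false;

    event enableTap (bool shouldEnable)
    {
        tapEnabled = shouldEnable;
    }

    void run()
    {
        loop
        {
            out << in;

            if (tapEnabled)
                tap << in;   // only feed the FIFO when someone is watching

            advance();
        }
    }
}
```

Note that even when disabled, the processor still sits in the signal path; whether the compiler can make the disabled branch essentially free is a separate question.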
This leads to another question: Are you going to support multiple compilation units? Or is it up to the host application to feed signals from one compilation unit to another? Perhaps bypassing one graph while we replace it with another, and so on.
Well, that’s the general idea. You’d build a graph which has a bunch of extra output streams at the top level which are sending out all the bits you might want to monitor.
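At the top level, that might look something like this (the filter and probe processors, and all the names, are placeholders for whatever the real graph contains):

```soul
// Hypothetical top-level graph exposing an extra monitoring output
// alongside the normal audio output.
graph MainGraph  [[ main ]]
{
    input  stream float audioIn;
    output stream float audioOut;
    output stream float filterTap;   // extra stream the host can monitor

    let
    {
        filter = MyFilter;      // placeholder for a real processor
        probe  = SignalProbe;   // pass-through with a 'tap' output
    }

    connection
    {
        audioIn     -> filter.in;
        filter.out  -> probe.in;
        probe.out   -> audioOut;
        probe.tap   -> filterTap;
    }
}
```

The host then reads `filterTap` like any other output stream and pushes it into a FIFO that the GUI thread drains at its own rate.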
Not sure I really understand what you mean about compilation units, that’s a C++ term that doesn’t really apply here… The compiler will link a single graph, probably from multiple .soul files or chunks of SOUL code, and then you can play that graph. There’s nothing to stop you building many graphs and running them at the same time, but if you do that then it’d be up to your host to redirect data between their streams if that’s what you want to do.
Re: compilation units. Sorry, I abused the terminology for lack of a better word. I am trying to imagine what it would be like to make changes to the graph topology without stopping the audio (live patching?). My brain immediately jumped to compilation units, object files and using symbol tables to do run-time linking. Anyway, like you said, this is all probably squarely in the domain of the host application.