MultiProcessor-AudioProcessorGraph is easy to implement


#1

I opened a thread about this before, but after rethinking it, it's not so complicated to do.

  • Every node (AudioProcessor) has its own local AudioSampleBuffer and an Atomic counter

  • When the AudioProcessorGraph is initialising, it calculates how many (processed) input buffers every node needs; this is simply the number of connections on that node's inputs. (Very easy to calculate!)
    (Example: a mono plugin whose input is connected to the outputs of two other plugins needs 2 input buffers.)

  • The worker threads just iterate through the nodes again and again.
    Whenever a node gets processed (which happens when its "number of processed input buffers" equals the pre-calculated number), it decreases the atomic counter of each connected node (once for every connection it has to that specific node).

The Audio-Input node will begin, because the number of pre-calculated buffers it requires is zero. The worker threads' iteration loop will run as long as the audio output node hasn't yet received the number of pre-calculated buffers it needs.
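To make the idea concrete, here is a minimal sketch of the counter-based scheduling described above. All the types and names (`Node`, `Connection`, `runWorker`, `processNode`, ...) are illustrative assumptions, not the real JUCE AudioProcessorGraph API:

```cpp
#include <atomic>
#include <cstddef>
#include <functional>
#include <vector>

// Stand-ins for the real graph types -- just enough for the sketch.
struct Connection { std::size_t srcNode, dstNode; };

struct Node
{
    std::atomic<int> pendingInputs { 0 };  // inputs still missing this pass
    int requiredInputs = 0;                // pre-calculated at initialisation
};

// Step 1: when the graph initialises, count the input connections per node.
inline void precalculateInputCounts (std::vector<Node>& nodes,
                                     const std::vector<Connection>& connections)
{
    for (const auto& c : connections)
        ++nodes[c.dstNode].requiredInputs;

    for (auto& n : nodes)                  // reset the counters for this pass
        n.pendingInputs.store (n.requiredInputs);
}

// Step 2: each worker thread iterates over the nodes again and again.
// A node whose counter has reached zero is claimed (CAS to -1 so no other
// thread takes it), processed, and then the counters of all nodes it feeds
// are decremented. The loop ends once the output node has been processed.
inline void runWorker (std::vector<Node>& nodes,
                       const std::vector<Connection>& connections,
                       std::size_t outputNode,
                       std::atomic<bool>& outputDone,
                       const std::function<void (std::size_t)>& processNode)
{
    while (! outputDone.load (std::memory_order_acquire))
    {
        for (std::size_t i = 0; i < nodes.size(); ++i)
        {
            int expected = 0;
            if (! nodes[i].pendingInputs.compare_exchange_strong (expected, -1))
                continue;                  // not ready yet, or already claimed

            processNode (i);               // render into the node's local buffer

            for (const auto& c : connections)
                if (c.srcNode == i)
                    nodes[c.dstNode].pendingInputs.fetch_sub (1);

            if (i == outputNode)
                outputDone.store (true, std::memory_order_release);
        }
    }
}
```

Note how the input node needs no special-casing: its counter starts at zero, so any worker can claim it immediately, and readiness then propagates through the graph via the decrements.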

When a node is calculated, it collects all the audio data from the connected nodes (this can be done through buffer-switching, or, if it is a one-to-many connection, via copying).
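The collect step might look something like this; `Buffer` is a stand-in for AudioSampleBuffer, and `numReaders` (how many nodes consume the upstream output) is an assumption that would come from the connection table:

```cpp
#include <utility>
#include <vector>

using Buffer = std::vector<float>;  // stand-in for AudioSampleBuffer

// If this node is the only reader of the upstream buffer, just swap (steal)
// it -- no samples are copied. If the upstream output feeds several nodes
// (one-to-many), we must copy, since the other readers still need the data.
inline void collectInput (Buffer& localBuffer, Buffer& upstreamBuffer, int numReaders)
{
    if (numReaders == 1)
        std::swap (localBuffer, upstreamBuffer);   // buffer-switching
    else
        localBuffer = upstreamBuffer;              // one-to-many: copy
}
```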

So, it's easy: no kind of special sorting is required.

Jules, could you please implement this soon? :wink:
The reason I'm asking: often, after baking my own solution, you come around with something similar (often better :))


#2

Sorry, not soon! I think describing it as “easy” is a little bit optimistic!


#3

If I send you an AudioProcessorGraphAbstract class (and you let AudioProcessorGraph inherit from it) and an abstract Node class, and also move Connections and Nodes outside the class definition, would you apply these changes?
It would make my multiprocessor version more exchangeable/interoperable. Alternatively I could write my own wrapper, but I like the idea of "fixing" the API of AudioProcessorGraph.


#4

Sure, I’ll take a look when I get chance, but I don’t know when that’ll be.