I’m developing software that isn’t really audio-related, but now I have to add some audio-related features too. It’s a node-based system, so I have a class for each piece of functionality. For the audio part I need to write nodes like AudioInput (mic/line-in), AudioLoader (file), AudioBalance (pan), AudioGain, AudioMixer and AudioOutput.
I know and really love JUCE’s architecture and conventions, but now I’m lost: I just don’t understand the architecture of the audio classes.
In my case I think all of my nodes should use AudioSources as their input/output ports. But my AudioOutput node should stream audio all the time (even when its input is empty, in which case it outputs silence), and when it is connected to an AudioSource (the input is not a nullptr) it should play it. I have a Message thread and a Processing thread, and both can change my nodes’ values.
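To make the question a bit more concrete, here’s a rough sketch of what I imagine the AudioOutput node could look like. This is just an illustration of the idea, not code I’m committed to: the class name, the plain CriticalSection for thread-safety, and the way setInput() is called from the other threads are all my own guesses.

```cpp
// Rough sketch only: an AudioOutput node that always streams, and plays
// whatever AudioSource is plugged into it. Names are placeholders.
#include <JuceHeader.h>

class AudioOutputNode : public juce::AudioSource
{
public:
    // Called from the message/processing thread when the graph changes.
    // (A plain CriticalSection is a naive stand-in for real thread-safety.)
    void setInput (juce::AudioSource* newInput)
    {
        const juce::ScopedLock sl (lock);
        input = newInput;
    }

    void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override
    {
        const juce::ScopedLock sl (lock);
        if (input != nullptr)
            input->prepareToPlay (samplesPerBlockExpected, sampleRate);
    }

    void releaseResources() override
    {
        const juce::ScopedLock sl (lock);
        if (input != nullptr)
            input->releaseResources();
    }

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& info) override
    {
        const juce::ScopedLock sl (lock);
        if (input != nullptr)
            input->getNextAudioBlock (info);   // connected: pass the input through
        else
            info.clearActiveBufferRegion();    // not connected: output silence
    }

private:
    juce::CriticalSection lock;
    juce::AudioSource* input = nullptr;        // nullptr == nothing connected
};
```

The plan would be to hook something like this up to an AudioSourcePlayer / AudioDeviceManager so the device callback keeps running even when nothing is connected, but I’m not sure this is the intended way to use these classes, hence the question.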
I don’t need any code, just some advice on how to build a system like this: which node should use which audio class?
(P.S.: Jules, and now all of the ROLI team: JUCE is awesome!)