I’m trying to learn how to make VST/AU synthesisers with JUCE. I’m getting along nicely with the GUI side of JUCE, but have a few questions about the best way to structure a plug-in instrument, since the JUCE audio plugin demo only shows an effect plugin.
Obviously I’d like the parameters of my synth to be automatable. As far as I can see, the AudioProcessor class is the easiest way to get this functionality in JUCE. So the obvious thing to do would be to attach an ‘editor’ to a subclass of AudioProcessor, then call the AudioProcessor’s processBlock method from my SynthesiserVoice object. But I want my synth to be polyphonic, so it would need an individual processor for each voice. Maybe it would be better to have the editor just modify variables in the AudioProcessor object, which are then grabbed by each SynthesiserVoice’s renderNextBlock? Am I on the right track here?
This also leads me on to the question of thread safety. If I have multiple voices reading those same variables, and the editor possibly modifying them at the same time, I’ll need to somehow ensure these variables are thread-safe. I imagined that for simple variables I could use std::atomic and get away with it. But if I wanted to start sending more data from the GUI to the audio thread, I realise I’m getting into the black art of FIFO queues and ring buffers, which I freely admit I don’t really understand. What would it take to, say, safely pass a small struct (for example, representing a new modulation routing) to the audio thread? I have seen VFLib’s concurrency classes, and they look like a good framework for me to use, but I’m not really sure how to start setting up a system using them. Are there any examples of this floating around?
Thanks in advance for any pointers, this is tougher than I had anticipated! :oops: