Plug-ins are dynamically loaded libraries (DLLs on Windows, shared libraries/bundles elsewhere).
That's why AU, VST, AAX, etc. have SDKs / APIs, which are basically standards defining where the host expects things to happen.
(The host loads your library and calls the functions it expects you to provide/implement, just as you call functions in your own code.)
A common DSP workflow is based on loops, so basically you have a unit that keeps doing the same thing over and over again. (The internal implementation decides what past data it should hold, or even future data when delay compensation / extra buffering is involved.)
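To make the "unit holding past data" idea concrete, here is a minimal, hypothetical sketch (not from any real SDK): a one-sample delay whose internal state survives between successive buffer calls, so the audio stays continuous across buffer boundaries.

```cpp
#include <cassert>
#include <vector>

// Illustrative toy unit: processes buffers in a loop and remembers
// one past sample between calls (a one-sample delay).
struct OneSampleDelay {
    float previous = 0.0f;  // past data carried across process() calls

    void process(std::vector<float>& buffer) {
        for (float& sample : buffer) {
            const float current = sample;
            sample = previous;   // output the sample from one step ago
            previous = current;  // remember the input for the next step
        }
    }
};
```

Feeding two consecutive buffers through the same instance shows the state crossing the buffer boundary: the first sample of the second buffer's output is the last input of the first buffer.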
Common callbacks for audio are:
- creation (the constructor that first builds the object, as in any object-oriented language)
- prepare to play: where you usually reset your processing state, both the first time and whenever something changes (such as the sample rate)
- process: the holy-grail callback, called over and over again, expecting you to provide a continuous audio signal
JUCE is cross-platform and cross-format, so it provides wrappers that take your basic JUCE implementation and connect it to the proprietary API of each specific format.
Most if not all audio plug-ins separate the view from the actual processing. They must of course be separate and concurrent in order to provide continuous audio.
So the PluginProcessor is your DSP and the PluginEditor is your view.
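One common way the editor talks to the processor without blocking the audio thread is a lock-free atomic parameter. This is a hedged sketch with made-up names (real JUCE plug-ins typically route parameters through AudioProcessorValueTreeState instead):

```cpp
#include <atomic>
#include <cassert>
#include <vector>

// Illustrative processor/editor split. The editor writes the parameter
// on the message (UI) thread; the processor reads it on the audio thread.
struct Processor {
    std::atomic<float> gain{1.0f};  // shared, lock-free parameter

    void process(std::vector<float>& buffer) {
        // Read once per block; no locks, so the audio thread never waits.
        const float g = gain.load(std::memory_order_relaxed);
        for (float& s : buffer)
            s *= g;
    }
};

struct Editor {
    Processor& processor;

    void onSliderMoved(float newGain) {  // hypothetical UI callback
        processor.gain.store(newGain, std::memory_order_relaxed);
    }
};
```

The design point: the two objects never block each other, so a slow or frozen UI can never interrupt the stream of process calls.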
JUCE's code is wide open (it's source-available), which is very helpful; the best way to get some understanding of it is hopping between classes and seeing how they're used.