I have just finished my application. I used the AudioSource -> AudioSourcePlayer -> AudioDeviceManager pipeline: my Track class extends AudioSource, and I do all the audio work in each track's getNextAudioBlock(), by filling bufferToFill.
Now I want to add some VST effects to it, and I see that everyone uses the AudioProcessor -> AudioProcessorGraph -> AudioProcessorPlayer -> AudioDeviceManager paradigm…
Does this mean that all my work with AudioSources is wasted and that I have to rethink my entire codebase?
And if so, could you please suggest some threads or tutorials that clearly explain how to use the AudioProcessor paradigm?