Hi folks!
I’m currently developing a VST plug-in that has to process audio from multiple sources, and I’m wondering what the best architecture for this is.
I have two subclasses of AudioTransportSource, one for the vocal track and one for the background track. Each of them owns its own AudioProcessor (a custom processor chain), and the processing is applied in the getNextAudioBlock override. These two tracks are added to a MixerAudioSource.
Everything works great in real time.
The final result, the rendered vocal track with the effect applied, also works well.
However, I now have to apply an offline preprocessing effect before rendering the vocal track. This effect needs access to the whole audio data of both tracks: I have to process the background track to extract some features, then use them as parameters in the vocal track’s AudioProcessor. It would look like this:
backgroundData -> offlineBackgroundAudioProcessor -> processedBackgroundDataInformation
vocalData + processedBackgroundDataInformation -> offlineVocalAudioProcessor -> final audio
My question is: how can I best structure this?
My first idea is to create two offline audio processors, BackgroundAudioProcessor and VocalAudioProcessor, and have BackgroundAudioProcessor expose a getter for its result. An offline renderer/mixer would then take care of the processing order.
It would look like this:
void OfflineRendererMixer::render (AudioFormatReader* backgroundReader,
                                   AudioProcessor* backgroundAudioProcessor,
                                   AudioFormatReader* vocalReader,
                                   AudioProcessor* vocalAudioProcessor)
{
    // ... read the audio data into buffers and prepare the processors

    backgroundAudioProcessor->process (backgroundBuffer);
    auto backgroundInformation = backgroundAudioProcessor->getResult();

    vocalAudioProcessor->setData (backgroundInformation);
    vocalAudioProcessor->process (vocalBuffer);

    // vocalBuffer now holds the final result
}
What do you think about it? I feel a better solution could be found, but I can’t figure out which one.