Is it possible to apply a VST to my audio application?

Hi folks!

I am developing an audio application focused on mixing some wavs in real time and applying effects to them. At the moment I have a MixerAudioSource with 8 PositionableAudioSources. I want to process these with reverb, compressor, delay, gate, etc… but I can’t find a tutorial on how to do this in an AudioApplication, only for VST plugins. What is the correct way to do this? Can I route my VST’s input channel to my AudioSources?

Thanks for helping me.

If your architecture is already based on AudioSources, it’s probably easiest to create an AudioSource class that can process the effects (for example, by wrapping an AudioProcessor instance).

If you are willing to do a larger rewrite of your application, you could use the Juce AudioProcessorGraph to connect AudioProcessors together, but you would need to write various new AudioProcessor classes for that. There isn’t, for example, an AudioProcessor included in Juce that plays audio files.

Something I noticed myself: is the plugin’s processBlock method the same as the AudioApp’s getNextAudioBlock method?

Yes, they are very similar in purpose: both are called when a new block of audio needs to be processed.

And the effects in plugins act on those blocks of audio. So can I apply any kind of VST effect to my AudioSources if I modify info.buffer in the getNextAudioBlock method?

It’s not that simple because the buffer given to getNextAudioBlock isn’t necessarily meant to be used fully, while AudioProcessor::processBlock expects that the whole buffer is going to be used. You need to make an AudioBuffer that is compatible with processBlock. (Of course without doing any memory allocations during the audio processing.)

Okay, but what won’t work if I use the getNextAudioBlock solution? Can’t I play with startSample and numSamples to reach the same result as processBlock? I don’t really want to change to a plugin format, because I’ve already implemented a lot of things in my current project.

You can’t manipulate startSample and numSamples yourself; you need to use them as they are given to you by Juce/the OS. startSample isn’t always going to be 0, and numSamples isn’t always going to be the same length as the AudioBuffer in the AudioSourceChannelInfo. (This can depend on a number of things; you might not currently be seeing that behaviour in your setup, but the Juce AudioSources have been designed so that it can happen in certain situations.) You will need to figure out a way to make an AudioBuffer for the AudioProcessors whose audio starts at sample 0 and is numSamples long.

Something like this in getNextAudioBlock:

juce::AudioBuffer<float> plugbuf (bufferToFill.buffer->getArrayOfWritePointers(),
                                  bufferToFill.buffer->getNumChannels(),
                                  bufferToFill.startSample,
                                  bufferToFill.numSamples);
// dummymidibuffer needs to be a member variable : juce::MidiBuffer dummymidibuffer;
if (plug != nullptr)
    plug->processBlock (plugbuf, dummymidibuffer);

This is safe to do on the audio thread because it uses the AudioBuffer constructor that refers to the already-allocated data in the AudioSourceChannelInfo’s buffer, so no memory is allocated during processing.