Add VST plugin support to MixerAudioSource

Hi,

I would like to add VST plugin support to our application.
Until recently we used FMOD for sound playback in our app, but we needed more control, so we switched to JUCE. In FMOD it’s very easy to add VST plugins to channels or channel groups (as they are called in FMOD).

In our app there are three levels at which we would want to apply VST plugins: the master output, the tracks, and the clips playing in those tracks.
I’m using the MixerAudioSource class to create groups and tracks, so this MixerAudioSource class would seem the perfect place to build plugin support into.
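To sketch the current structure (a rough snippet using present-day JUCE names; the function and the clip sources are just illustrative stand-ins):

```cpp
#include <JuceHeader.h>

// Roughly our current structure: clips mix into tracks, tracks mix into the
// master output, all via MixerAudioSource. Plugins would need to hook in at
// each of these three levels.
void buildMix (juce::MixerAudioSource& master,
               juce::MixerAudioSource& track1,
               juce::AudioSource* clipA,    // clip-playing sources (stand-ins)
               juce::AudioSource* clipB)
{
    track1.addInputSource (clipA, false);   // false: mixer doesn't take ownership
    track1.addInputSource (clipB, false);
    master.addInputSource (&track1, false); // the track feeds the master mixer
}
```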
Do you think this is a sensible approach, or should I look in a completely different direction?

I took a look at the PluginHost, where all the filters (plugins) are processed through the audioDeviceIOCallback. This doesn’t seem to be a good solution for us, because all the plugins are processed on the master output.

So any help on this matter is appreciated.

Edwin

What I’m moving towards is using AudioProcessor objects to implement this sort of thing - hopefully I’ll be able to phase out AudioSources eventually, as AudioProcessors will do the same thing, but better. The latest version of the host uses the AudioProcessorGraph class, which is worth taking a look at.
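For reference, loading a VST into an AudioPluginInstance (the AudioProcessor subclass the host wraps plugins in) looks roughly like this - a sketch against the current plugin-format classes, with the function name and error handling purely illustrative:

```cpp
#include <JuceHeader.h>
#include <memory>

// Sketch: ask each registered plugin format whether it recognises the file,
// then create an instance from the first matching description.
std::unique_ptr<juce::AudioPluginInstance> loadEffect (const juce::File& pluginFile,
                                                       double sampleRate, int blockSize)
{
    juce::AudioPluginFormatManager formatManager;
    formatManager.addDefaultFormats();          // registers VST (and AU etc.) support

    juce::KnownPluginList list;
    juce::OwnedArray<juce::PluginDescription> found;

    for (int i = 0; i < formatManager.getNumFormats(); ++i)
        list.scanAndAddFile (pluginFile.getFullPathName(), true, found,
                             *formatManager.getFormat (i));

    if (found.isEmpty())
        return nullptr;                         // not a usable plugin

    juce::String error;
    return formatManager.createPluginInstance (*found[0], sampleRate, blockSize, error);
}
```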

Hi Jules,
I took a look at the AudioProcessor class and the PluginHost in the subversion tip. The PluginHost is an application that works in a very generic way, so I can’t easily extract the correct way of using the AudioProcessor class from it.

How would I use this AudioProcessor class in the situation described in my previous post? I guess it boils down to the following: how do you apply a VST (effect) plugin to an AudioSource using this AudioProcessor class?

I think most people will simply be interested in playing back audio samples with one or more VST plugins (or AUs, for that matter) attached.
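Roughly what I mean, as a sketch (the class name is made up, and it assumes the plugin’s channel layout matches the source’s):

```cpp
#include <JuceHeader.h>

// Illustrative: an AudioSource that pulls audio from another source and runs
// it through a loaded plugin in place. Neither object is owned by the wrapper.
class PluginAudioSource : public juce::AudioSource
{
public:
    PluginAudioSource (juce::AudioSource* inputSource,
                       juce::AudioPluginInstance* effectPlugin)
        : source (inputSource), plugin (effectPlugin) {}

    void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override
    {
        source->prepareToPlay (samplesPerBlockExpected, sampleRate);
        plugin->prepareToPlay (sampleRate, samplesPerBlockExpected); // note: flipped order
    }

    void releaseResources() override
    {
        source->releaseResources();
        plugin->releaseResources();
    }

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& info) override
    {
        source->getNextAudioBlock (info);  // fill the buffer with the dry signal

        // Wrap the region we were given and let the plugin process it in place.
        juce::AudioBuffer<float> block (info.buffer->getArrayOfWritePointers(),
                                        info.buffer->getNumChannels(),
                                        info.startSample, info.numSamples);
        juce::MidiBuffer midi;             // no MIDI needed for an audio effect
        plugin->processBlock (block, midi);
    }

private:
    juce::AudioSource* source;
    juce::AudioPluginInstance* plugin;
};
```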

Any thoughts on this?

It’ll be able to do all that stuff when I’ve written AudioProcessors that can play files and stuff. You’ll basically stick them all inside a graph object, and tell it which channels to connect together (same as the graph in the host app), and the whole thing will play. This should be more flexible than using audiosources, which have to be chained together in sequence.
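Something along these lines, as a very rough sketch (using the graph API’s current names; where the plugin instances come from is up to you):

```cpp
#include <JuceHeader.h>
#include <memory>

// Sketch: device input -> per-track effect -> master effect -> device output.
void wireUp (juce::AudioProcessorGraph& graph,
             std::unique_ptr<juce::AudioPluginInstance> trackFx,
             std::unique_ptr<juce::AudioPluginInstance> masterFx)
{
    using IO = juce::AudioProcessorGraph::AudioGraphIOProcessor;

    auto in     = graph.addNode (std::make_unique<IO> (IO::audioInputNode));
    auto track  = graph.addNode (std::move (trackFx));
    auto master = graph.addNode (std::move (masterFx));
    auto out    = graph.addNode (std::make_unique<IO> (IO::audioOutputNode));

    for (int ch = 0; ch < 2; ++ch)   // stereo: connect each channel in turn
    {
        graph.addConnection ({ { in->nodeID,     ch }, { track->nodeID,  ch } });
        graph.addConnection ({ { track->nodeID,  ch }, { master->nodeID, ch } });
        graph.addConnection ({ { master->nodeID, ch }, { out->nodeID,    ch } });
    }
}
```

(A file-playing processor, once it exists, would just replace the input node here.)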

It’s all still a work-in-progress though, at the moment.

Hi Jules,

We are at the point where we’re starting to worry a bit about which direction to go in.
Until now we’ve used the ResamplingAudioSource and MixerAudioSource classes (and a bunch of others) in our audio engine, but I’m afraid that if we put a lot of time into this and you then release your new AudioProcessor classes, we’ll need to rebuild a lot.

You talked about the AudioProcessor classes you’re working on that play back files and such, but I guess they won’t be available soon. Could you give a worst-case estimate of when you will release that new set of classes? Please be honest; it doesn’t matter if you say next year, but then at least we know what to expect.

My second question is: is there a way we can use the AudioProcessor (AudioPluginInstance) together with the classes that already exist?
I guess everything is possible, of course, but any pointers would be greatly appreciated.

cheers,
Edwin

Can’t give you a date or anything, it’s just an ongoing thing that I’m working on. Of course if you’re using AudioSources you can carry on - if you want me to leave the code in there then that’s no problem, but AudioProcessors will solve a few problems that AudioSources can’t. (They’re not very different, TBH, so it would be easy enough to move over.)

You could certainly just write a type of AudioProcessor that takes an audiosource and plays it - that’d be quite a simple wrapper class.
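For illustration, a rough sketch of such a wrapper under current JUCE names (the class name is made up, and the remaining required AudioProcessor overrides are simply stubbed):

```cpp
#include <JuceHeader.h>

// Illustrative: an AudioProcessor that plays whatever an AudioSource produces,
// so a source can live inside an AudioProcessorGraph alongside plugins.
class AudioSourceProcessor : public juce::AudioProcessor
{
public:
    explicit AudioSourceProcessor (juce::AudioSource* sourceToWrap)
        : source (sourceToWrap) {}

    void prepareToPlay (double sampleRate, int samplesPerBlock) override
    {
        source->prepareToPlay (samplesPerBlock, sampleRate);  // AudioSource argument order
    }

    void releaseResources() override  { source->releaseResources(); }

    void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
    {
        // Hand the graph's buffer to the wrapped source to fill.
        juce::AudioSourceChannelInfo info (&buffer, 0, buffer.getNumSamples());
        source->getNextAudioBlock (info);
    }

    // Boilerplate required by AudioProcessor, stubbed out for the sketch:
    const juce::String getName() const override                 { return "AudioSourceProcessor"; }
    double getTailLengthSeconds() const override                { return 0.0; }
    bool acceptsMidi() const override                           { return false; }
    bool producesMidi() const override                          { return false; }
    juce::AudioProcessorEditor* createEditor() override         { return nullptr; }
    bool hasEditor() const override                             { return false; }
    int getNumPrograms() override                               { return 1; }
    int getCurrentProgram() override                            { return 0; }
    void setCurrentProgram (int) override                       {}
    const juce::String getProgramName (int) override            { return {}; }
    void changeProgramName (int, const juce::String&) override  {}
    void getStateInformation (juce::MemoryBlock&) override      {}
    void setStateInformation (const void*, int) override        {}

private:
    juce::AudioSource* source;   // not owned by the wrapper in this sketch
};
```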