I’m working on a little desktop application to play back a set of audio files. Currently it sets up several instances of a modified version of the AudioDemo class from JuceDemo, connects the AudioTransportSources to a MixerAudioSource, connects the Mixer to the AudioSourcePlayer, and the AudioSourcePlayer to the AudioDevice. This all works beautifully.
I’m interested in doing some processing on the AudioTransportSources before they reach the Mixer, as well as processing the composite stream after it’s mixed. For now, I’d like to do a couple of simple things like altering the volume and panning of the individual and composite streams. Maybe also compute the RMS to display a simple VU meter.
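For the panning part, I assume something like an equal-power pan law computed per block would do the trick. Here's my own rough sketch of the math (not anything from the JUCE headers, just what I have in mind):

```cpp
#include <cmath>
#include <utility>

// Equal-power pan: pan in [-1, 1] (left..right), returns {leftGain, rightGain}.
// Just the math; applying the gains per-sample would live inside getNextAudioBlock().
std::pair<float, float> panGains(float pan)
{
    const float pi = 3.14159265358979f;
    const float angle = (pan + 1.0f) * 0.25f * pi; // maps [-1, 1] onto [0, pi/2]
    return { std::cos(angle), std::sin(angle) };   // unity total power at any pan
}
```

Centered (pan = 0) this gives both channels roughly 0.707, so the summed power stays constant as the pan moves.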
I realize I could handle volume using AudioTransportSource::setGain, but I’d rather set up a framework with which I could later do some more complex processing.
How would you suggest I go about this so that it fits properly into the JUCE model? Should I subclass AudioTransportSource and MixerAudioSource and, in each, override getNextAudioBlock()? Or is there some sort of AudioSource filter mechanism I’m not currently seeing?
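For what it’s worth, here’s roughly the pattern I have in mind: a pass-through source that wraps another source, pulls the next block from it, and then processes that block in place (gain now, pan/filters later, plus capturing the RMS for a VU meter). I’ve sketched it below against a tiny stand-in interface rather than the real JUCE classes, just to show the shape; in the real thing it would derive from AudioSource and operate on the AudioSourceChannelInfo’s buffer. All the names here are my own placeholders:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Stand-ins for AudioSource / AudioSourceChannelInfo, just to show the pattern.
struct BlockInfo
{
    std::vector<std::vector<float>>* buffer; // one vector of samples per channel
    int startSample;
    int numSamples;
};

struct AudioSourceLike
{
    virtual ~AudioSourceLike() = default;
    virtual void getNextAudioBlock (const BlockInfo& info) = 0;
};

// A pass-through "filter" source: sits between a transport source and the mixer,
// pulling audio from its input and processing it on the way through.
class ProcessingSource : public AudioSourceLike
{
public:
    explicit ProcessingSource (AudioSourceLike& input) : input_ (input) {}

    void setGain (float g)                      { gain_ = g; }
    float getLastRMS (std::size_t channel) const { return rms_[channel]; }

    void getNextAudioBlock (const BlockInfo& info) override
    {
        input_.getNextAudioBlock (info);          // pull the upstream block first

        rms_.assign (info.buffer->size(), 0.0f);

        for (std::size_t ch = 0; ch < info.buffer->size(); ++ch)
        {
            float sumSq = 0.0f;

            for (int i = 0; i < info.numSamples; ++i)
            {
                float& s = (*info.buffer)[ch][(std::size_t) (info.startSample + i)];
                s *= gain_;                       // per-stream gain; pan, EQ etc. would go here
                sumSq += s * s;
            }

            rms_[ch] = std::sqrt (sumSq / (float) info.numSamples); // feed a VU meter
        }
    }

private:
    AudioSourceLike& input_;
    float gain_ = 1.0f;
    std::vector<float> rms_;
};
```

The same wrapper would slot in twice: one instance per transport source before the mixer, and one around the mixer’s output for the composite stream.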
Any suggestions would be most appreciated. Thanks.