Would any kind soul mind giving me the 2 minute version of what audio classes I should use now to add playback to an app?
The samples will be coming from libquicktime, and I can get them as 16-bit int or float. I want to play them out either as untouched channels or as a stereo mix (possibly both), and then mix between two streams. Output will be as samples again, or to attached audio interfaces/outputs.
There’s just a few too many classes for me to see where to start, and I know there’s the whole plug-in host set too.
My read on it is that an AudioDeviceManager has a ‘callback’ which can be an AudioSourcePlayer, an AudioSourcePlayer has a source that can be an AudioTransportSource, and an AudioTransportSource has a source which can be an AudioFormatReaderSource, and it’s about there that my head explodes.
I could construct a chain like that, sure, but seeing where in there a ‘mixer’ might go for instance is not clear to me.
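To make that concrete, here is roughly the chain I think I’m being pointed at, with a guess that MixerAudioSource is where a mixer would slot in (someFile is a placeholder, and the exact device-manager calls are from memory, so they may differ between Juce versions):

[code]
AudioFormatManager formatManager;
formatManager.registerBasicFormats();

// one stream: file reader -> transport
AudioFormatReaderSource readerSource (formatManager.createReaderFor (someFile), true);
AudioTransportSource transport;
transport.setSource (&readerSource);

// is this where a 'mixer' goes? a second transport would be added here for the cross-fade
MixerAudioSource mixer;
mixer.addInputSource (&transport, false);

// the player pulls blocks from the mixer and hands them to the audio device
AudioSourcePlayer player;
player.setSource (&mixer);

AudioDeviceManager deviceManager;
deviceManager.initialise (0, 2, 0, true);
deviceManager.addAudioCallback (&player);
[/code]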
And then there’s the new AudioProcessor stuff to factor in. If someone would be so kind, I’d like a hint on what sub-class each stage of this might benefit from:
Handle samples from a file:
Convert to 48kHz:
Create a stereo mix from those samples, adjust the gain of each and pass on as stereo plus x channels:
Mix together two streams as above (this is a cross-fade):
Optionally do final level adjustments, maybe sweetening:
Output as samples to a custom interface, and to selected outputs (sound card):
I think I’m adrift a bit because I was expecting to see something that paralleled a physical audio system.
The new AudioProcessor classes are more like the sort of thing you’re after, but I’ve not finished writing all the types of processor that you need (wave file readers, etc). I agree it’s all a bit complicated, but when this is done it should all be a lot more understandable. (And efficient).
Use an object derived from AudioIODeviceCallback to hook into the audio stream flowing in or out of your computer. The method that gets called once per block of samples is audioDeviceIOCallback().
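From memory, its signature is roughly this (argument names may differ slightly between Juce versions):

[code]
void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
                            float** outputChannelData, int numOutputChannels,
                            int numSamples);
[/code]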
Audio coming into your audio device is stored in inputChannelData; you write the data to be rendered by your audio card into outputChannelData.
Juce currently has three built-in classes derived from AudioIODeviceCallback: AudioSourcePlayer, AudioProcessorPlayer and AudioFilterStreamer. You can also derive your own class.
AudioSourcePlayer is used to “play” anything derived from AudioSource. It does this by calling the virtual method AudioSource::getNextAudioBlock(), which is passed an AudioSourceChannelInfo.
AudioSourceChannelInfo contains an AudioSampleBuffer, which is useful for doing DSP (buffers can be mixed, have gain or pan applied, etc.).
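For example, inside getNextAudioBlock() you can work on the buffer directly; a sketch (otherBuffer is just a made-up name for some other AudioSampleBuffer you want to mix in):

[code]
// inside getNextAudioBlock (const AudioSourceChannelInfo& info):
AudioSampleBuffer& buffer = *info.buffer;

// apply an overall gain to this block...
buffer.applyGain (info.startSample, info.numSamples, 0.5f);

// ...or mix channel 0 of another buffer into channel 0 here, at a gain of 0.7
buffer.addFrom (0, info.startSample, otherBuffer, 0, 0, info.numSamples, 0.7f);
[/code]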
AudioProcessorPlayer is also derived from MidiInputCallback, so it is useful if your application includes MIDI. Otherwise it is similar to AudioSourcePlayer, except that it plays AudioProcessor-derived objects instead of AudioSource ones. It does this by calling AudioProcessor::processBlock().
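That callback looks roughly like this (again from memory):

[code]
void processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages);
[/code]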
AudioProcessor and AudioProcessorPlayer are meant to be used as part of Jules’ new plug-in framework, which is currently under development. But these classes already work, and you can see them in action in the demo AudioHost project.
I’m not sure about the AudioFilterStreamer since I’ve never used it. It seems to be an AudioProcessorPlayer with an AudioPlayHead added. The documentation implies it is intended to play a single AudioProcessor, but you could play multiple AudioProcessors by just specifying an AudioProcessorGraph object as your AudioProcessor. The AudioPlayHead is meant to specify a playback position that can be shared by multiple AudioProcessors to sync their playback. Normally this is supplied by the host (through the plug-in wrapper). The AudioFilterStreamer supplies this itself, so maybe it’s meant to be used as a mini-host, say to hardwire a plugin into one’s application.
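If you did want to use it (or an AudioProcessorPlayer) as a mini-host that way, I’d imagine the wiring looks something like this (an untested sketch; adding and connecting nodes in the graph is left out, and method names are from memory):

[code]
AudioProcessorGraph graph;
// ... add your plugin and audio I/O nodes to the graph and connect them ...

AudioProcessorPlayer player;
player.setProcessor (&graph);              // the graph is itself an AudioProcessor

deviceManager.addAudioCallback (&player);  // deviceManager being your AudioDeviceManager
[/code]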
Useful example, obviously fails on a Mac without some tweaking, though. For info (for others following the thread) I changed the if…else block from line 113 in MainComp.cpp to:
[code] else if (buttonThatWasClicked == textButton2)
{
    //[UserButtonCode_textButton2] -- add your button handler code here..
    FileChooser myChooser ("Please choose a file to load...",
                           File::getSpecialLocation (File::userHomeDirectory),
                           "*.wav");

    if (myChooser.browseForFileToOpen())
    {
        // ... hand myChooser.getResult() to whatever loads the file
    }
} [/code]
One way to do that is, for a variable called "panning" between 0 and 1, to apply the gain (1 - panning) to the left channel, and the gain (panning) to the right channel.
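As a sketch, assuming a stereo AudioSampleBuffer called buffer:

[code]
void applyLinearPan (AudioSampleBuffer& buffer, float panning)   // 0 = hard left, 1 = hard right
{
    buffer.applyGain (0, 0, buffer.getNumSamples(), 1.0f - panning);  // left channel
    buffer.applyGain (1, 0, buffer.getNumSamples(), panning);         // right channel
}
[/code]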
Well.. there's a little bit more to it :) You can choose different methods to pan the signal: http://en.audiofanzine.com/recording-mixing/editorial/articles/panning-laws-revealed.html
My favorite is the equal-power panning law; it's the one that, I think, keeps the perceived volume constant.
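For what it's worth, the equal-power version of the sketch above would look like this: the gains follow a quarter sine/cosine curve, so the summed power stays constant across the pan range:

[code]
void applyEqualPowerPan (AudioSampleBuffer& buffer, float panning)   // 0 = hard left, 1 = hard right
{
    const float angle = panning * 1.5707963f;   // 0 .. pi/2

    buffer.applyGain (0, 0, buffer.getNumSamples(), std::cos (angle));  // left channel
    buffer.applyGain (1, 0, buffer.getNumSamples(), std::sin (angle));  // right channel
}
[/code]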