Audio Classes

Would any kind soul mind giving me the 2 minute version of what audio classes I should use now to add playback to an app?

The samples will be coming from libquicktime, and I can get them as 16-bit int or float. I want to play them out either as untouched channels or stereo mixed (possibly both) and then mix between two streams. Output will be as samples again, or to attached audio interfaces/outputs.

There are just a few too many classes for me to see where to start, and I know there’s the whole plug-in host set too.



Hi Bruce

This may be a starting point for you:

It’s a small sample project which streams a WAV file with buffering and sample-rate correction using the JUCE classes:



Mmm, thanks but not really, no.

My read on it is that an AudioDeviceManager has a ‘callback’ which can be an AudioSourcePlayer, and an AudioSourcePlayer has a source that can be an AudioTransportSource, and an AudioTransportSource has a source which can be an AudioFormatReaderSource, and it’s about there that my head explodes.

I could construct a chain like that, sure, but seeing where in there a ‘mixer’ might go for instance is not clear to me.
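For anyone following along, that chain (with a MixerAudioSource slotted in where a mixer would go) can be sketched roughly like this. This is a hypothetical wiring sketch, not a complete program: the file name is made up, error handling is omitted, and the exact signatures (particularly the callback-registration call and the `setSource` arguments) vary between JUCE versions.

```cpp
// Hypothetical wiring of the chain described above, with a MixerAudioSource
// inserted so several transports can be summed (e.g. for a cross-fade).
AudioDeviceManager deviceManager;
deviceManager.initialise (0, 2, nullptr, true);            // 0 ins, 2 outs

AudioFormatManager formatManager;
formatManager.registerBasicFormats();                      // wav, aiff, ...

AudioFormatReader* reader =
    formatManager.createReaderFor (File ("example.wav"));  // hypothetical file

AudioFormatReaderSource readerSource (reader, true);       // true = owns reader

AudioTransportSource transport;    // adds start/stop, buffering, and (via its
transport.setSource (&readerSource);  // optional extra arguments) resampling

MixerAudioSource mixer;                        // <-- a mixer slots in here
mixer.addInputSource (&transport, false);      // add a second transport to cross-fade

AudioSourcePlayer player;
player.setSource (&mixer);
deviceManager.addAudioCallback (&player);      // older JUCE: setAudioCallback
transport.start();
```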

And then there’s the new AudioProcessor stuff to factor in. If someone would be so kind, I’d like a hint on what sub-class each stage of this might benefit from:

Handle samples from a file:

Convert to 48kHz:

Create a stereo mix from those samples, adjust the gain of each and pass on as stereo plus x channels:

Mix together two streams as above (this is a cross-fade):

Optionally do final level adjustments, maybe sweetening:

Output as samples to a custom interface, and to selected outputs (sound card):

I think I’m adrift a bit because I was expecting to see something that paralleled a physical audio system.


The new AudioProcessor classes are more like the sort of thing you’re after, but I’ve not finished writing all the types of processor that you need (wave file readers, etc). I agree it’s all a bit complicated, but when this is done it should all be a lot more understandable. (And efficient).

Well, I have the raw samples from libquicktime. I just need sample rate conversion (possibly) and mixing and output. Are they ready for that?


No, not yet - stick to the audiosources for now…

Which takes us back to Doe, a deer etc.

The Audio Sources and digging around is what started this. Is there a 3 sentence version of how they work?


Do you think these classes rate as ready yet Jules?


I’ve been deep in cocoa stuff lately, haven’t done any more work on them…

Use an object derived from AudioIODeviceCallback to hook into the audio stream flowing in or out of your computer. The method that gets called once per audio block is

audioDeviceIOCallback (const float** inputChannelData, int numInputChannels, float** outputChannelData, int numOutputChannels, int numSamples)

Audio coming into your computer is stored in inputChannelData; you write the data to be rendered by your audio card into outputChannelData.

JUCE currently has three built-in objects derived from AudioIODeviceCallback: AudioSourcePlayer, AudioProcessorPlayer, and AudioFilterStreamer. You can also derive your own class.

AudioSourcePlayer is used to “play” anything derived from AudioSource. It does this by calling the virtual method

getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)

AudioSourceChannelInfo contains an AudioSampleBuffer which is useful for doing DSP (they can be mixed, gain or pan applied, etc.).

AudioProcessorPlayer is also derived from MidiInputCallback, so it is useful if your application includes MIDI. Otherwise it is similar to AudioSourcePlayer, except that it plays AudioProcessor-derived objects instead of AudioSource ones. It does this by calling

processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)

AudioProcessor and AudioProcessorPlayer are meant to be used as part of Jules’s new plug-in framework, which is currently under development. But these classes work already, and you can see them in action as part of the demo AudioHost project.

I’m not sure about the AudioFilterStreamer since I’ve never used it. It seems to be an AudioProcessorPlayer with an AudioPlayHead added. The documentation implies it is intended to play a single AudioProcessor, but you could play multiple AudioProcessors by just specifying an AudioProcessorGraph object as your AudioProcessor. The AudioPlayHead is intended to specify a playback position that can be shared by multiple AudioProcessors to sync their playback. Normally this is supplied by the host (through the plug-in wrapper). The AudioFilterStreamer supplies this itself, so maybe it’s meant to be used as a mini-host, say to hardwire a plugin into one’s application.

Thanks for that. I’ve mostly worked out the old school approach, and I’m getting half decent results, in surprisingly little time.

Now I just have to fight with the thread interactions between my other access and the audio threads.



Useful example, obviously fails on a Mac without some tweaking, though. For info (for others following the thread) I changed the if…else block from line 113 in MainComp.cpp to:

[code]
else if (buttonThatWasClicked == textButton2)
{
    //[UserButtonCode_textButton2] -- add your button handler code here...
    FileChooser myChooser ("Please choose a file to load...",
                           File::getSpecialLocation (File::userHomeDirectory));

    if (myChooser.browseForFileToOpen())
    {
        File myFile (myChooser.getResult());
        wpc.openFile (myFile);
    }
}
[/code]

Along with the changes to the AudioDeviceSelectorComponent: the constructor call from line 53 in SimpleAudio.cpp becomes

AudioDeviceSelectorComponent audioSettingsComp (adm, 0, 0, 0, 6, false, false, true, false);

Maybe I am missing something, but is there an easy way to apply panning to an AudioSampleBuffer, similar to applyGain?


You can apply gain to individual channels of the buffer using

applyGain (int channel, int startSample, int numSamples, float gain) noexcept

So… that’s panning, really.

One way to do that is, for a variable called "panning" between 0 and 1, to apply the gain (1 - panning) to the left channel, and the gain (panning) to the right channel.
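A minimal, library-free sketch of that linear pan in plain C++ (the function name and the split stereo-buffer layout are just illustrative; with an AudioSampleBuffer you’d make the equivalent two applyGain calls, one per channel):

```cpp
#include <cstddef>

// Linear pan: pan = 0 -> hard left, pan = 1 -> hard right.
// Applies (1 - pan) to the left channel and (pan) to the right channel,
// i.e. the same thing as two per-channel applyGain calls.
void applyLinearPan (float* left, float* right, std::size_t numSamples, float pan)
{
    const float leftGain  = 1.0f - pan;
    const float rightGain = pan;

    for (std::size_t i = 0; i < numSamples; ++i)
    {
        left[i]  *= leftGain;
        right[i] *= rightGain;
    }
}
```

Note that at the centre (pan = 0.5) each channel gets a gain of 0.5, which is why a linearly panned signal dips in loudness in the middle; that is the motivation for the equal-power law mentioned below.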

Well.. there's a little bit more to it :) You can choose different methods to pan the signal:

My favorite is equal power panning law, it's the one I think keeps the perceived volume constant.
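For reference, the equal-power law maps the pan position onto a quarter circle, so the two gains satisfy left² + right² = 1 and the summed power stays constant across the sweep. A small sketch (the function name is made up):

```cpp
#include <cmath>

// Equal-power pan: pan in [0, 1], 0 = hard left, 1 = hard right.
// Gains follow cos/sin of (pan * pi/2), so leftGain^2 + rightGain^2 == 1
// and the perceived loudness stays roughly constant as the pan moves.
void equalPowerPanGains (float pan, float& leftGain, float& rightGain)
{
    const float angle = pan * 1.57079632679f;  // pan * pi/2

    leftGain  = std::cos (angle);
    rightGain = std::sin (angle);
}
```

At the centre this gives each channel a gain of about 0.707 (−3 dB) rather than the linear law’s 0.5 (−6 dB), which is exactly the mid-pan loudness dip it avoids.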