Do you mean inputs from your audio card and outputs to the audio card? In that case an AudioSource won’t work; it’s for audio that is generated elsewhere, for example from a file or stream, or generated directly using, say, the ToneGeneratorAudioSource. The audio from the sound card finds its way into your app by associating an instance of AudioIODeviceCallback with the instance of AudioIODevice representing your sound card. This is usually done through the AudioDeviceManager.
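For example, hooking a callback up to the default device through the AudioDeviceManager can be as simple as this (a minimal sketch; myCallback stands in for any object derived from AudioIODeviceCallback):

    AudioDeviceManager deviceManager;
    deviceManager.initialise (2, 2, nullptr, true);  // 2 ins, 2 outs, no saved state, fall back to default device
    deviceManager.addAudioCallback (&myCallback);    // myCallback: your AudioIODeviceCallback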
If you don’t want to write your own AudioIODeviceCallback (not hard, but low level), then you can use an AudioSourcePlayer to “play” objects in the AudioSource hierarchy or an AudioProcessorPlayer to play objects in the AudioProcessor hierarchy. Both of these objects are derived from AudioIODeviceCallback and override the audioDeviceIOCallback method, which pumps audio in and out of the system:
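    virtual void audioDeviceIOCallback (const float** inputChannelData,
                                        int numInputChannels,
                                        float** outputChannelData,
                                        int numOutputChannels,
                                        int numSamples) = 0;

(That’s the classic JUCE signature; recent JUCE versions add a context argument via audioDeviceIOCallbackWithContext, but the idea is identical.)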
An object that implements AudioIODeviceCallback, such as AudioSourcePlayer or AudioProcessorPlayer, gets called periodically, approximately every blockSize / sampleRate seconds, where blockSize is the buffer size specified when the AudioIODevice was set up in the AudioDeviceManager (for example, a 512-sample buffer at 44100 Hz means a callback roughly every 11.6 ms). This object then reads sample data from inputChannelData, processes the data in some way, and writes the result to outputChannelData.
AudioSourcePlayer
Juce supplies two classes derived from AudioIODeviceCallback, AudioSourcePlayer and AudioProcessorPlayer, that “play” AudioSources and AudioProcessors respectively. The AudioSourcePlayer plays its attached AudioSource by calling its getNextAudioBlock method, which has the following prototype:
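    virtual void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill) = 0;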
AudioSourceChannelInfo consists of a pointer to an AudioSampleBuffer and two ints: startSample, which specifies where the AudioSource should put the first sample, and numSamples, which specifies how many samples to put in the buffer. I won’t go into the details of AudioSampleBuffer here, other than to say it is at the heart of any DSP or mixing. There are methods to copy between buffers, apply gain or pan, determine levels, get samples in or out of the buffer, and read/write to a file or stream. Get to know it well.
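To give a flavour, here is a minimal (hypothetical) AudioSource whose getNextAudioBlock just fills its region of the buffer with silence; the commented-out lines show a couple of the AudioSampleBuffer operations mentioned above:

    class SilenceSource : public AudioSource
    {
    public:
        void prepareToPlay (int /*samplesPerBlock*/, double /*sampleRate*/) override {}
        void releaseResources() override {}

        void getNextAudioBlock (const AudioSourceChannelInfo& info) override
        {
            // fill the requested region (startSample..startSample + numSamples) with silence
            info.buffer->clear (info.startSample, info.numSamples);

            // a few typical AudioSampleBuffer operations:
            // info.buffer->applyGain (info.startSample, info.numSamples, 0.5f);
            // float peak = info.buffer->getMagnitude (info.startSample, info.numSamples);
        }
    };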
AudioProcessorPlayer
An AudioProcessorPlayer plays its attached AudioProcessor by calling its processBlock method, which has the following prototype:
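    virtual void processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages) = 0;

(In recent JUCE versions AudioSampleBuffer is a typedef for AudioBuffer<float>.)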
Although similar to getNextAudioBlock, processBlock is bi-directional. Thus the AudioProcessorPlayer places the data for the AudioProcessor to process in buffer (and MIDI messages in midiMessages, but I’m skipping MIDI for this discussion), calls processBlock, and then reads or copies the processed audio back out of buffer. The data then gets to the audio output port by copying it to the outputChannelData argument of audioDeviceIOCallback. The way input and output channels are mapped in the AudioSampleBuffer is a bit tricky, but well explained in the comments for AudioProcessor::processBlock.
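In outline, the player’s callback does something like the following (a simplified sketch, not JUCE’s actual implementation; it ignores channel-count mismatches, and processor is assumed to be the attached AudioProcessor*):

    void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
                                float** outputChannelData, int numOutputChannels,
                                int numSamples)
    {
        // wrap the output channels so processBlock can work in place on them
        AudioSampleBuffer buffer (outputChannelData, numOutputChannels, numSamples);

        // copy the incoming audio into the buffer...
        for (int ch = 0; ch < jmin (numInputChannels, numOutputChannels); ++ch)
            buffer.copyFrom (ch, 0, inputChannelData[ch], numSamples);

        // ...let the processor replace it with processed audio...
        MidiBuffer midi;
        processor->processBlock (buffer, midi);

        // ...and since buffer wraps outputChannelData, the result is already
        // sitting in the output channels.
    }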
AudioSources, AudioProcessors, or Roll Your Own: Which Do I Use?
Jules has made it clear in many posts that the AudioProcessor architecture is the future of audio in Juce. It is more modular than the AudioSource architecture in the sense that it can incorporate, or be incorporated into, third-party software through the use of plug-ins. Then there is the plug-in framework built around the AudioProcessorGraph and AudioPluginInstance classes. The AudioProcessorGraph class is used to wire AudioProcessors together into an audio processing network of arbitrary complexity. The AudioPluginInstance is used to incorporate external plug-ins such as VSTs on Windows or AudioUnits on the Mac. There’s also built-in MIDI processing, which in the AudioSource hierarchy has to be added by hand via MidiInputCallback or the MidiMessageCollector.
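For instance, here is a minimal sketch of wiring the device’s input straight through to its output with an AudioProcessorGraph (this uses the classic graph API; method signatures have changed in recent JUCE versions):

    AudioProcessorGraph graph;
    typedef AudioProcessorGraph::AudioGraphIOProcessor IOProc;

    // special processors representing the device's physical ins and outs
    AudioProcessorGraph::Node::Ptr input  = graph.addNode (new IOProc (IOProc::audioInputNode));
    AudioProcessorGraph::Node::Ptr output = graph.addNode (new IOProc (IOProc::audioOutputNode));

    // connect stereo input channels 0 and 1 to the corresponding outputs
    graph.addConnection (input->nodeId, 0, output->nodeId, 0);
    graph.addConnection (input->nodeId, 1, output->nodeId, 1);

    // the graph is itself an AudioProcessor, so an AudioProcessorPlayer can play it
    AudioProcessorPlayer player;
    player.setProcessor (&graph);
    deviceManager.addAudioCallback (&player);   // deviceManager as set up earlier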
On the other hand, the AudioSource architecture includes more ready-made objects that do something to the audio. For example, there is the AudioFormatReaderSource to stream in audio from a file, the MixerAudioSource to mix several sources together, or the AudioTransportSource, which can be started, stopped, and moved to an arbitrary position in the stream.
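Chaining these together gives you a basic file player in a few lines (a sketch; it assumes deviceManager is an initialised AudioDeviceManager and file is a File, and omits error handling such as createReaderFor returning nullptr):

    AudioFormatManager formatManager;
    formatManager.registerBasicFormats();   // WAV, AIFF, etc.

    // reader -> transport -> player -> sound card
    AudioFormatReaderSource readerSource (formatManager.createReaderFor (file), true);
    AudioTransportSource transport;
    transport.setSource (&readerSource);

    AudioSourcePlayer player;
    player.setSource (&transport);
    deviceManager.addAudioCallback (&player);

    transport.start();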
The problem with the AudioSource architecture is that although the built-in objects can be wired together to build a simple file player or non-MIDI-controlled synthesizer, you have to crack open existing objects or derive new ones to add functionality. For example, the MixerAudioSource can’t mix incoming audio with its attached AudioSources, and you’d have to write your own AudioIODeviceCallback derivative to capture audio to a file, whereas both can be done in the AudioProcessor framework by wiring together plug-ins that may not even have been created in Juce. Furthermore, the wiring can be done by the user at run time (see the Audio Plug-in Host demo for an example of this).
Finally, you can do audio processing directly in audioDeviceIOCallback and skip the Audio classes altogether (but you’re giving up a lot of functionality!). As an example, here’s the audioDeviceIOCallback for an AudioLoopback class that just copies input audio to output. Note it is not industrial strength, as it assumes that data is packed contiguously into the lowest channels. Thus, for stereo, it would assume data in channels 0 and 1. You can break it with a multichannel device if you disable the lower channels. For example, if you use the Line 6 TonePort GX device, which allows the user to select 2 of 4 channels, and select channels 3 and 4, you will crash this code. But it does show the basic processing.
void AudioLoopback::audioDeviceIOCallback (const float** inputChannelData,
                                           int totalNumInputChannels,
                                           float** outputChannelData,
                                           int totalNumOutputChannels,
                                           int numSamples)
{
    for (int i = 0; i < totalNumInputChannels; ++i)
    {
        if (i < totalNumOutputChannels && outputChannelData[i] != nullptr)
        {
            if (inputChannelData[i] != nullptr)
            {
                // have an output channel that wants data, so give it to it
                memcpy (outputChannelData[i], inputChannelData[i], sizeof (float) * numSamples);
            }
            else
            {
                // no corresponding input, so fill with zeros
                zeromem (outputChannelData[i], sizeof (float) * numSamples);
            }
        }
    }

    // zero any extra outputs beyond the number of inputs
    for (int j = totalNumInputChannels; j < totalNumOutputChannels; ++j)
    {
        if (outputChannelData[j] != nullptr)
        {
            zeromem (outputChannelData[j], sizeof (float) * numSamples);
        }
    }
}
And if you do go with the AudioProcessor hierarchy, be sure to check out the Audio Host and Plugin example code, as careful study of these programs will answer a lot of your questions.