Audio Capture from Input Device: any examples?

After successfully coding a straight audio loopback (in to out) using AudioIODeviceCallback, I attempted to record the stream from the input device. I almost got it to work by writing the input passed to audioDeviceIOCallback to an AudioFormatWriter, but the instructions in the documentation for AudioFormatWriter::write (that is: cast float** to int**) don’t work. Doesn’t stereo audio need to be interleaved first? And the straight float to int cast doesn’t make sense: don’t you have to do a multiplication, e.g. int output = input[iChan][iSample] < 0 ? int(input[iChan][iSample] * 32767) : int(input[iChan][iSample] * 32768)? The recording I got kind of reproduced the input, but it was all square waves, just what you’d expect from a straight cast.
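For reference, the kind of scaled conversion I have in mind looks roughly like this (just a sketch to illustrate the question, not code from JUCE):

[code]// scale a channel of floats in [-1, 1] to 16-bit integer samples
static void floatToInt16 (const float* input, short* output, int numSamples)
{
    for (int i = 0; i < numSamples; ++i)
    {
        float s = input[i];

        // clamp to the legal range, then scale +/-1.0 to the 16-bit range
        if (s >  1.0f) s =  1.0f;
        if (s < -1.0f) s = -1.0f;

        output[i] = (short) (s * 32767.0f);
    }
}
[/code]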

The second problem is that I’m doing the file write on the high-priority thread in audioDeviceIOCallback, and from what I’ve read, the recommended way to do this is to fill an AudioSampleBuffer and write it from a second, normal-priority thread. How do you synchronize access to the buffer? More importantly, won’t the high-priority thread overwrite the buffer before the normal-priority thread writes it out? Do I need a ring buffer or a double-buffering scheme similar to DirectSound’s? I’ve looked for forum examples that write audio to a file, but the ones I’ve found either don’t do it in real time, or they copy input streamed from a file rather than from a device. Any help or an example would be greatly appreciated.

Sorry, I don’t have any example code, but check out AudioSampleBuffer::writeToAudioWriter, and yes, you certainly shouldn’t do any file i/o in the audio thread! Just set up a background thread and use a circular buffer to pass data to it.
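If your JUCE version has an AbstractFifo class, a minimal sketch of that handshake could look something like this (all names here are made up, mono for brevity, and it assumes the usual JUCE headers are included):

[code]// pass samples from the audio callback to a background writer thread
class CaptureFifo
{
public:
    CaptureFifo() : fifo (bufferSize) {}

    // called from the audio thread: copy samples into the circular buffer
    void push (const float* data, int numSamples)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (numSamples, start1, size1, start2, size2);

        if (size1 > 0) memcpy (buffer + start1, data, size1 * sizeof (float));
        if (size2 > 0) memcpy (buffer + start2, data + size1, size2 * sizeof (float));

        fifo.finishedWrite (size1 + size2);
    }

    // called from the background thread: hand queued samples to the file writer
    void writeTo (AudioFormatWriter& writer)
    {
        int start1, size1, start2, size2;
        fifo.prepareToRead (fifo.getNumReady(), start1, size1, start2, size2);

        const float* block1[] = { buffer + start1 };
        const float* block2[] = { buffer + start2 };

        if (size1 > 0) writer.writeFromFloatArrays (block1, 1, size1);
        if (size2 > 0) writer.writeFromFloatArrays (block2, 1, size2);

        fifo.finishedRead (size1 + size2);
    }

private:
    enum { bufferSize = 65536 };
    AbstractFifo fifo;
    float buffer [bufferSize];
};
[/code]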

Jules, thanks for the suggestion; audio capture is working. I can even stream it to an Ogg file in real time, something that didn’t work when I was streaming to a file from the audio thread.
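Creating the Ogg writer was nothing special; it’s something along these lines (a sketch with an arbitrary file name and quality setting, error handling omitted, using the raw-pointer style of the JUCE version used elsewhere in this thread):

[code]// create an Ogg Vorbis writer for the background thread to stream into
File outputFile (File::getSpecialLocation (File::userDocumentsDirectory)
                    .getChildFile ("capture.ogg"));

OggVorbisAudioFormat oggFormat;
FileOutputStream* stream = outputFile.createOutputStream();

// the writer takes ownership of the stream
AudioFormatWriter* writer = oggFormat.createWriterFor (stream,
                                                       44100.0,            // sample rate
                                                       2,                  // channels
                                                       16,                 // bits per sample
                                                       StringPairArray(),  // metadata
                                                       0);                 // quality option index
[/code]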

Hi,

So does your code actually read the MIDI messages, or does it record the actual sound and store it as data?

Would it be OK to share this code with us? I’m trying to read incoming MIDI messages from a MIDI piano, and I guess your code could be useful.

Regards,

No, I’m just storing audio data; I’m not interested in MIDI at the moment. The code reads in an Ogg Vorbis or WAV file and simultaneously records the audio input to an Ogg Vorbis or WAV file. The audio input and the file being played back are mixed and then sent to the audio output. What I’m going for is a multitrack recorder: previous tracks are played back while new ones are recorded.

What’s cool about Ogg Vorbis is that it doesn’t distort the timing of the samples, so sample numbers remain invariant through the encoding/decoding process. This makes it really easy to sync multiple tracks of Ogg Vorbis files, something MP3 can’t do.

I’d be happy to post the code, but unfortunately I just moved and don’t have an Internet connection yet. I’ll try to post it in a day or two.

Thanks a lot, appreciate it.

I’ve been trying to write a simple program that listens to piano MIDI messages and does something with the received MIDI message. Unfortunately I’m new to JUCE and C++ and I’m having some basic problems.

Hopefully your code could be useful in some way for understanding how to use the JUCE functions and so forth.

Thanks again


The code examples below demonstrate how to capture audio from the input device while playing back an audio stream. There are four examples. The first is the audioDeviceIOCallback method of a class, AudioPlayer, that wraps an AudioFormatReader and an AudioTransportSource. It reads numSamples samples from the transport, mixes them with the audio from the input device, and then calls the audioDeviceIOCallback of the AudioRecorder class to capture the input audio to disk:

[code]void AudioPlayer::audioDeviceIOCallback (const float** inputChannelData,
                                         int totalNumInputChannels,
                                         float** outputChannelData,
                                         int totalNumOutputChannels,
                                         int numSamples)
{
    // for now assume #input channels == #output channels

    // read data from the file via the transport source
    AudioSampleBuffer buffer (totalNumInputChannels, numSamples);
    AudioSourceChannelInfo info;

    info.buffer = &buffer;
    info.numSamples = numSamples;
    info.startSample = 0;

    transport.getNextAudioBlock (info);

    // mix file data with live input
    for (int i = 0; i < totalNumInputChannels; i++)
    {
        for (int j = 0; j < numSamples; j++)
        {
            // mix input + transport
            float sample = inputChannelData[i] != 0 ? inputChannelData[i][j] : 0.0f;
            sample += *buffer.getSampleData (i, j);

            // clip if out of range
            if (sample > 1.0f)  sample = 1.0f;
            if (sample < -1.0f) sample = -1.0f;

            if (outputChannelData[i] != 0)
                outputChannelData[i][j] = sample;
        }
    }

    // send data to the recorder
    recorder.audioDeviceIOCallback (inputChannelData, totalNumInputChannels,
                                    outputChannelData, totalNumOutputChannels, numSamples);

    // recorder.audioDeviceIOCallback ((const float**) outputChannelData, totalNumInputChannels,
    //                                 outputChannelData, totalNumOutputChannels, numSamples);
}
[/code]

Note the commented-out code at the end of the above method. That line records the file being played back as well as the live input, which is useful if, for example, you want to bounce tracks.

The next code listing shows AudioRecorder::audioDeviceIOCallback, the method that gets called from AudioPlayer::audioDeviceIOCallback to write the input data to disk.

[code]void AudioRecorder::audioDeviceIOCallback (const float** inputChannelData,
                                           int totalNumInputChannels,
                                           float** outputChannelData,
                                           int totalNumOutputChannels,
                                           int numSamples)
{
    // write to file if recording
    if (isRecording())
    {
        // wrap the input data in a buffer (this constructor references
        // the channel data rather than copying it)
        AudioSampleBuffer buf ((float**) inputChannelData, 2, numSamples);
        _sink.write (buf);
        samplesSoFar += numSamples;
    }
}
[/code]

The above code is pretty simple: it wraps inputChannelData in an AudioSampleBuffer and then calls AudioSink::write, which copies the data into a circular buffer that is written to disk by a separate thread.

Note that the number of samples read from the AudioTransportSource is the same as the number of samples passed to the AudioRecorder. This keeps the live sound synchronized with the recorded sound, modulo the delay it takes for the playback to reach the musician’s ear plus the time it takes for the sound at the audio input to reach audioDeviceIOCallback. This is equal to the round-trip delay you would get if the audio-in port were connected directly to the audio-out port. It can be compensated for if desired: just determine the sum of the input and output latency and delay the input by that amount. If the latency is low enough, though, it doesn’t matter; in fact, an experienced musician should be able to compensate for it. As an example, on my system the round-trip latency is 3 milliseconds with ASIO4All and 4 milliseconds with a Novation X-Station as the audio interface. 3 milliseconds is approximately the time it takes to hear a sound 3 feet away; in a live performance one’s amp is further away than that.
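If you do want to compensate, the latency can be read off the device in audioDeviceAboutToStart; a sketch (the member variable here is hypothetical):

[code]void AudioRecorder::audioDeviceAboutToStart (AudioIODevice* device)
{
    // total round-trip latency reported by the driver, in samples
    int latencySamples = device->getInputLatencyInSamples()
                       + device->getOutputLatencyInSamples();

    // store it so the recorded track can be shifted back by this amount
    // (roundTripLatencySamples is a hypothetical member of AudioRecorder)
    roundTripLatencySamples = latencySamples;
}
[/code]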

The last two examples are the producer and consumer routines of AudioSink, a class that wraps an AudioSampleBuffer and treats it as a circular queue. Data is written to the queue (the producer) during the high-priority audio i/o thread, and drained from it (the consumer) and written to a file during an application thread running at normal priority. The circular queue’s state is tracked by three variables: iRead, the read index; iWrite, the write index; and curSize. Since only the producer (AudioRecorder::audioDeviceIOCallback) updates the write index and only the consumer (AudioSink::run) updates the read index, iWrite and iRead do not need to be synchronized. They are declared with the volatile keyword, which forces the runtime to re-read the value rather than relying on a cached copy. As long as the consumer never reads past the producer’s write position, the circular buffer stays in a legal state; when the indices reach the physical end of the buffer, they wrap. The third variable, curSize, keeps track of how much data is actually in the buffer. Since it is shared, both read and write access are protected by a mutex that belongs to the AudioSink class rather than one locally declared in a method; this allows a thread in one method to block a thread in an entirely different method.

I’m currently using a buffer size of 65366 bytes in AudioSink, but this is probably more than I need. Disk writing on a modern Pentium-class machine is faster than the audio streaming rate of 4 bytes every 1/44100 of a second, but it is bursty, and trying to write the samples to disk as they arrive in audioDeviceIOCallback doesn’t work. The incoming data therefore needs to be buffered. Here’s the code that writes the data into the buffer; recall that it is called from AudioRecorder::audioDeviceIOCallback.

[code]bool AudioSink::write (const AudioSampleBuffer& source)
{
    bool result = true;
    int numSamplesToWrite = source.getNumSamples();
    int numChannelsToWrite = source.getNumChannels();
    int maxSize = _buffer->getNumSamples();
    int availableToWrite = getAvailableToWrite();

    if (numSamplesToWrite <= availableToWrite)
    {
        if (numChannelsToWrite >= _buffer->getNumChannels())
        {
            // source has more channels, write only as many
            // channels as we have
            for (int i = 0; i < _buffer->getNumChannels(); i++)
            {
                // check for end of buffer
                if (numSamplesToWrite + iWrite > maxSize)
                {
                    // have to break up the write
                    int countTillEnd = maxSize - iWrite;
                    int leftOver = numSamplesToWrite - countTillEnd;
                    _buffer->copyFrom (i, iWrite, source, i, 0, countTillEnd);
                    _buffer->copyFrom (i, 0, source, i, countTillEnd, leftOver);
                }
                else
                {
                    _buffer->copyFrom (i, iWrite, source, i, 0, numSamplesToWrite);
                }
            }
        }
        else
        {
            jassert (false); // should never get here
        }

        // update buffer info
        iWrite = (iWrite + numSamplesToWrite) % maxSize;
        mutex.enter();
        curSize += numSamplesToWrite;
        mutex.exit();
    }
    else
    {
        result = false;
    }

    return result;
}
[/code]
Finally, here’s the code that actually writes the audio input to disk. It runs on a normal-priority thread.

[code]void AudioSink::run()
{
    int maxSize = _buffer->getNumSamples();

    while (! threadShouldExit())
    {
        int availableToRead = getAvailableToRead();

        if (availableToRead > 0)
        {
            if (iRead + availableToRead <= maxSize)
            {
                _buffer->writeToAudioWriter (_writer, iRead, availableToRead);
            }
            else
            {
                // have to break up the read
                int countTillEnd = maxSize - iRead;
                int leftOver = availableToRead - countTillEnd;
                _buffer->writeToAudioWriter (_writer, iRead, countTillEnd);
                _buffer->writeToAudioWriter (_writer, 0, leftOver);
            }

            // update buffer info
            iRead = (iRead + availableToRead) % maxSize;
            mutex.enter();
            curSize -= availableToRead;
            mutex.exit();
        }
        else
        {
            // nothing queued yet: sleep briefly so we don't busy-wait
            // while the audio thread fills the buffer
            sleep (10);
        }
    }
}
[/code]
Finally, I want to emphasize that this is a work in progress; I will probably toss this code and start from scratch with the lessons learned from the code described above. But before that I want to create a positionable version of MixerAudioSource so that I can retrieve samples from multiple files and mix them down to a stereo mix that can be used to monitor playback while recording additional tracks.

Hi, and thanks for your example code, which helped me a lot in doing my first recording :)
I’ve now tried to scale your code: I’m trying to build a recorder that records all active outputs of my application to a corresponding number of mono WAV files. The idea is to give the recorder an audioDeviceIOCallback of its own and write the output data to disk. In audioDeviceAboutToStart I create two arrays containing the files to write to and the output streams to them. The writing thread should then loop over all active channels and create a writer for the corresponding stream from the second array, roughly as sketched below. I just wondered if that is a good approach or if anyone has a better idea. Since I only started playing with C++ when I found JUCE, I’m not very experienced in coding…
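Roughly what I have in mind for setting up the writers is something like this (just a sketch, all names made up, error handling omitted):

[code]void MultiChannelRecorder::audioDeviceAboutToStart (AudioIODevice* device)
{
    WavAudioFormat wavFormat;
    double sampleRate = device->getCurrentSampleRate();
    int numOutputs = device->getActiveOutputChannels().countNumberOfSetBits();

    // one mono WAV writer per active output channel
    for (int chan = 0; chan < numOutputs; ++chan)
    {
        File file (recordingDir.getChildFile ("track" + String (chan) + ".wav"));

        // writers is an OwnedArray<AudioFormatWriter>; the writer takes
        // ownership of the output stream
        writers.add (wavFormat.createWriterFor (file.createOutputStream(),
                                                sampleRate,
                                                1,                  // mono
                                                16,                 // bits per sample
                                                StringPairArray(),  // metadata
                                                0));                // quality index (unused for wav)
    }
}
[/code]

The writing thread would then pull each channel out of its buffer and hand it to the corresponding writer, much like AudioSink::run above.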

Thx for your help,

Ingo