AudioSampleBuffer - Ultimate Topic


Hi, people. Doing a search for “AudioSampleBuffer” I just found 27 topics. None of them seems to explain the uses of this class in depth.

So I’m creating this topic to collect important information about how it works, and how we can get data out of an AudioSampleBuffer in the common sense of the term “buffer”.

For example: does the JUCE documentation explain what happens when I read a block of data from an AudioSampleBuffer? I think not. In my (common-sense) understanding of a buffer’s semantics, data that has been read should be consumed, and the buffer should then be freed of that data, just like a queue with a fixed size.

That is, I would expect the block just read to be erased and the buffer’s head pointer to be advanced by just_read_block_size positions. On the other hand, a block written into the buffer would be added starting at the num_allocated_samples position.

When the buffer was full, it would either throw an exception or just let the programmer handle the buffer overflow himself.


This is semantics. What you’re talking about is a buffer which grows with the samples you put in it and shrinks with the ones you extract (this is normally called a ring buffer), whereas an AudioSampleBuffer is simply a block of samples that you pass around. Some objects might write into the block, others will read from it. The number of samples in it stays the same, though. The name AudioSampleBuffer might be a little confusing, and I think AudioSampleBlock would be more appropriate. Or even AudioFrameBlock, since a frame can contain several samples (all at the same instant in time).


Thanks for answering, robiwan.

I was talking about a buffer which has a fixed size, e.g. x samples.
So I could fill this buffer until it reaches an occupation of 100%. When I read it, the occupation decreases. The programmer would not have to deal with buffer management when reading, since the correct read position would always be 0. A write that would cause a buffer overflow would throw an exception.

For example, supposing the following buffer:

[ 800 samples ][ free space (200) ]

Doing a read of 500 samples, we would get this:

block = read(500);

[ 300 samples ][ free space (700) ]

In a writing operation we could have:

write(block); // block size is 1000 samples

This would throw an exception.

Isn’t it a nice buffer? :stuck_out_tongue:


Yes, I see why you’re confused now; it is just a naming thing. I’ve always understood a buffer to be just a generic holder for samples (which is all that these are), and it’s a ring buffer that has a notion of how full it is. You could very easily write a wrapper class that turns an AudioSampleBuffer into a ring buffer.
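To make the distinction concrete, here is a minimal single-channel sketch of such a wrapper idea in plain C++ (no JUCE types; the class and method names are invented for illustration). It implements the consume-on-read, throw-on-overflow semantics described above:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Minimal single-channel ring buffer with "consume on read" semantics.
// Names are illustrative only, not part of any JUCE API.
class RingBuffer
{
public:
    explicit RingBuffer (size_t capacity)
        : data (capacity), head (0), count (0) {}

    // Appends samples; throws if the buffer would overflow.
    void write (const float* samples, size_t n)
    {
        if (count + n > data.size())
            throw std::overflow_error ("ring buffer overflow");

        for (size_t i = 0; i < n; ++i)
            data[(head + count + i) % data.size()] = samples[i];

        count += n;
    }

    // Consumes samples: copies them out and frees their space.
    void read (float* dest, size_t n)
    {
        if (n > count)
            throw std::underflow_error ("not enough samples in buffer");

        for (size_t i = 0; i < n; ++i)
            dest[i] = data[(head + i) % data.size()];

        head = (head + n) % data.size();
        count -= n;
    }

    size_t available() const { return count; }

private:
    std::vector<float> data;
    size_t head, count;
};
```

With a capacity of 1000, writing 800 samples and then reading 500 leaves 300 in the buffer, and a further write of 800 throws, matching the [ 800 samples ][ free space (200) ] walkthrough above.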


Thanks for the answers. Now I’m convinced that I need a ring buffer or circular buffer, whatever you call it. I’m searching Google for an efficient implementation.


I haven’t had much success trying to use a ring buffer between the audio stream (which comes from AudioIODeviceCallback::audioDeviceIOCallback()) and my “sound processing layer”, which requires a fixed-length array of samples. I’m getting strange feedback.

On my application, I should need something like this:

audio stream -> ring buffer -> Fixed length reads -> Sound processing -> output to speakers.

Is there any structure I could use to guarantee that a constant number of samples (set by the application’s user) is sent to the sound processing module?



I’m not sure why you would need a ring buffer at all. Since audioDeviceIOCallback always supplies a fixed-length buffer of samples, you’d only need:

audioDeviceIOCallback (input) -> AudioSampleBuffer -> process your stuff (fixed length) -> AudioSampleBuffer -> audioDeviceIOCallback (output)

Or expressed in semi-pseudo code (assuming stereo):

[code]void AudioClass::audioDeviceIOCallback (const float** inputChannelData, int totalNumInputChannels,
                                        float** outputChannelData, int totalNumOutputChannels,
                                        int numSamples)
{
    AudioSampleBuffer inbuf ((float**) inputChannelData, 2, numSamples);
    AudioSampleBuffer outbuf (outputChannelData, 2, numSamples);

    processAudioData (inbuf, outbuf);
}

void AudioClass::processAudioData (const AudioSampleBuffer& in, AudioSampleBuffer& out)
{
    // Tweak the data to your heart's content
}[/code]

Should you need a different block size of samples in your processing code, you’d need a loop in audioDeviceIOCallback to break up its block size into what you need.
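As a concrete sketch of such a loop (plain C++, no JUCE; function and constant names are invented), assuming the device block size is an exact multiple of the processing block size — otherwise you would indeed need to buffer the remainder, e.g. in a ring buffer:

```cpp
// Illustrative only: splits one device callback block into fixed-size
// sub-blocks for a processor that expects exactly kProcessBlockSize samples.
// Assumes numSamples is a multiple of kProcessBlockSize.
constexpr int kProcessBlockSize = 256;

// Hypothetical fixed-size processing routine (here: a simple -6 dB gain).
void processFixedBlock (const float* in, float* out, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = in[i] * 0.5f;
}

// Called from audioDeviceIOCallback with one channel's data.
void handleCallbackBlock (const float* input, float* output, int numSamples)
{
    for (int offset = 0; offset + kProcessBlockSize <= numSamples;
         offset += kProcessBlockSize)
        processFixedBlock (input + offset, output + offset, kProcessBlockSize);
}
```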


Thanks! I’ll try it.


Hi all,

I would like to know: who sets the parameters of audioDeviceIOCallback?

I want to allocate a temporary buffer (float** rawOutputChannelData) for processing purposes (I have a resampling function which doesn't allow in-place processing).
But to allocate it correctly, I need to know the parameters. So I need to know which function sets them, and when.

Since the user can change the sample buffer size within “show audio settings”, I need to know the actual buffer size the user has chosen.



You get that info in the audioDeviceAboutToStart callback. The device given as a parameter has getCurrentSampleRate() and getCurrentBufferSizeSamples(). Just initialise an AudioSampleBuffer with that number of samples and use it in your audioDeviceIOCallback function.


Thank you very much, robiwan

“getting” them worked for me!

1.) But to delete (free) my buffer (here: float** rawOutputChannelData) I need totalNumOutputChannels outside the audioDeviceAboutToStart callback (my assumption: within the audioDeviceStopped() callback). Any idea how to access it, since the device pointer isn't available there?

2.) What about the other parameters, int totalNumOutputChannels and float** outputChannelData? I don't even use the AudioIODevice::open method.

thx again, for any help!

  1. Why would you need that? AudioSampleBuffer owns its own data, so you don’t need to free anything. Should the user change the buffer size, it’ll just resize; again, no need to free anything. And if the AudioSampleBuffer is a member of your callback implementation (which it should be), the data will be freed on destruction. Am I missing something?

  2. What about them? Since outputChannelData is an array of float arrays, you need totalNumOutputChannels to know how many of them there are.

  1. OK, I misunderstood.
    I thought “AudioSampleBuffer” was the name you gave to my pointer float** rawOutputChannelData! But I see it is a class of its own in JUCE.
    So I can use an AudioSampleBuffer object as a temporary variable as I like, and there is no need to init and allocate my own buffer float** rawOutputChannelData? Did I understand that right?

  2. I phrased my second question badly; it relates to my original post:
    Where can I get the parameters outputChannelData and totalNumOutputChannels? After leaving audioDeviceIOCallback, those parameters aren't accessible any more.

  1. Yep, you can use an AudioSampleBuffer as a temporary variable, but rather make it a member variable of your class so that you don’t get a malloc/free in each audioDeviceIOCallback call. You can still call clear() on the AudioSampleBuffer in each call (to zero out every sample of the buffer).

  2. outputChannelData specifically is only valid in the context of the audioDeviceIOCallback function. It contains the pointers to the buffers that actually output sound to your soundcard.
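To make that float** layout concrete, here is a small self-contained sketch (plain C++, no JUCE; the class name ChannelBuffer is made up) of a channel buffer that is allocated once — the way a member AudioSampleBuffer would be — and reused across callbacks:

```cpp
#include <algorithm>
#include <vector>

// Sketch of the float** layout used by audioDeviceIOCallback:
// an array of numChannels pointers, each pointing to numSamples floats.
// This helper owns such storage, allocated once and reused per callback.
class ChannelBuffer
{
public:
    ChannelBuffer (int numChannels, int numSamples)
        : storage ((size_t) numChannels,
                   std::vector<float> ((size_t) numSamples, 0.0f))
    {
        for (auto& channel : storage)
            pointers.push_back (channel.data());
    }

    // Usable wherever a float** "array of channel arrays" is expected.
    float** getArrayOfChannels() { return pointers.data(); }

    int getNumChannels() const { return (int) storage.size(); }
    int getNumSamples() const  { return (int) storage.front().size(); }

    // Zero every sample, similar in spirit to AudioSampleBuffer::clear().
    void clear()
    {
        for (auto& channel : storage)
            std::fill (channel.begin(), channel.end(), 0.0f);
    }

private:
    std::vector<std::vector<float>> storage;
    std::vector<float*> pointers;
};
```

The point is that the float** handed to you in the callback is only a view onto buffers the device owns; if you want data to outlive the callback, you copy it into storage like this, which you own.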


1) [quote=“robiwan”]1) (…) so that you don’t get malloc/frees in each audioDeviceIOCallback call. [/quote]
I wanted to malloc my temp variable during audioDeviceAboutToStart and free it during audioDeviceStopped, not in audioDeviceIOCallback. This has the advantage that (if one uses one's own buffer, as I originally wanted) the actual buffer size is always available.
And using the audioDeviceAboutToStart and audioDeviceStopped callbacks lets me alloc and free my pointer only when the buffer size has actually changed.

But how can I then access this pointer outside of audioDeviceIOCallback? And what about totalNumOutputChannels?

What do I want?
I just want to process my data (read from an audio file). I have an external library which does some time-stretching, and the result is what I want to play back. That's all!

My idea:
audioDeviceAboutToStart: here I alloc my own temp variable, float** rawOutputChannelData.

audioDeviceIOCallback: here audioSourcePlayer.audioDeviceIOCallback writes the audio data it has read into my temp var rawOutputChannelData. Then I call my time-stretch method with rawOutputChannelData as the source pointer and outputChannelData as the destination pointer, because that is the pointer my soundcard will read from.

audioDeviceStopped: finally, free my own temp variable rawOutputChannelData.

That is why I need all those parameters: the number of samples (buffer size), the number of channels totalNumOutputChannels, and the array outputChannelData.

EDIT: Is there a way to get totalNumOutputChannels ?


Why on earth would you want to??

audioDeviceIOCallback is called, it gives you some temporary input and output buffers, you read the input, process it in whatever way you want, leave the results in the output, and the function ends. The data would be meaningless in any other context.

And I hope you mean “pitch-shift” rather than “time-stretch”, because time-stretching a continuous realtime audio signal would certainly be impressive - especially if you manage to speed it up!


The external lib/SDK provides me methods for time-stretching and pitch-shifting.

referring to outputChannelData:

I want to process (time-stretch and/or pitch-shift) my audio data, which I've previously read from an audio file. I need to temporarily save the data which audioSourcePlayer.audioDeviceIOCallback delivered for me, then process it and write the results into outputChannelData, which my audio device will then read.

And the question still remains: how can I get the current totalNumOutputChannels?

You all sound as if I'm doing this totally wrong. What exactly makes you think that?


Sorry, but you do sound a bit flummoxed! :wink:

If you need to know the number of channels to expect, just use AudioIODevice::getActiveInputChannels() or getActiveOutputChannels(). You can call them in your prepareToPlay method.


This is not the correct way to do it. You have to write a special type of AudioSourcePlayer and put the time-stretching and pitch-shifting operations in that, since the audio source player has already allocated the data for the file you are trying to play. You don't need another audio source collecting previously played data, unless you want a realtime effect processor rather than a sample player with time-stretching and pitch-shifting.


I don't want to confuse you. I'm just wondering why nobody likes my idea; to my n00b eyes it seems the straightforward solution. I am a "JUCE weenie", so I'd like to know what I'm doing wrong and how to do it better.

This returns "only" a BitArray. Is there already a handy way to count the number of ones in it?

Hmm… is there a difference if I call them in the audioDeviceAboutToStart callback instead? Because I am using the JUCE (Audio) Demo as a template.
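On counting the ones in the BitArray: if I remember correctly, BitArray has a countNumberOfSetBits() method — check the class header in your JUCE version to be sure. If yours lacks it, counting set bits by hand is easy; here is a generic sketch over a plain unsigned mask (not JUCE code):

```cpp
// Generic population count over an unsigned mask, as a fallback if your
// BitArray version lacks a countNumberOfSetBits()-style method.
// Uses the classic trick: mask &= mask - 1 clears the lowest set bit.
int countSetBits (unsigned int mask)
{
    int count = 0;
    while (mask != 0)
    {
        mask &= mask - 1;  // clear the lowest set bit
        ++count;
    }
    return count;
}
```

For the typical case of active channels, you'd build the mask (or walk the BitArray bit by bit) and the count gives you the number of active channels.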