Is a one-second audio buffer 88200 samples in length (if sampleRate is 44100) - and where does samplesPerBlock come into the equation?

I’m trying to make another AudioBuffer 1 second long.

Am I right in thinking that the audio buffer would be 88,200 samples in length?
That is, 44100 * 2 channels (L & R).

What is the significance of samplesPerBlock, and where does that fit into the equation?

1 second of audio contains as many samples as the sample rate. So, if your sample rate is 44100 Hz (aka the number of samples per second!), that's 44100 audio samples. This is irrespective of your channels.

Nope. Channels are parallel to each other, so 1 second in a channel is 1 second in all channels.
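In code, that works out to something like this (just a sketch; makeOneSecondBuffer is a made-up helper and 44100 is only an example rate):

```cpp
#include <juce_audio_basics/juce_audio_basics.h>

// Sketch: a "1 second" buffer is sampleRate samples long per channel, whatever the channel count.
juce::AudioBuffer<float> makeOneSecondBuffer (int numChannels, double sampleRate)
{
    const int oneSecondInSamples = (int) sampleRate;                  // 44100 at a 44.1 kHz sample rate
    return juce::AudioBuffer<float> (numChannels, oneSecondInSamples);
}

// makeOneSecondBuffer (1, 44100.0) and makeOneSecondBuffer (2, 44100.0) both hold
// exactly one second of audio; the second one simply has another channel alongside.
```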

That’s how many samples arrive per audio callback, i.e. how many samples your audio device requests each time it asks for more audio to play back. Devices don’t ask for 1 second at a time (assuming the intention is to hear it in real time): processing that many samples at once takes a considerable amount of time, so you need to process in much smaller chunks.
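For context, here's roughly where those two numbers show up (a sketch; MyProcessor stands for your own juce::AudioProcessor subclass, whose declaration is omitted here):

```cpp
#include <juce_audio_processors/juce_audio_processors.h>

// samplesPerBlock is the (maximum expected) size of each chunk the host hands you,
// not the length of any 1-second buffer you allocate yourself.
void MyProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
    // e.g. sampleRate == 44100.0 and samplesPerBlock == 512:
    // one second of audio will arrive spread over roughly 44100 / 512 ≈ 86 callbacks.
    juce::ignoreUnused (sampleRate, samplesPerBlock);
}

void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midi)
{
    // Each callback delivers only a small chunk, usually samplesPerBlock samples (or fewer) per channel.
    const int numSamplesThisBlock = buffer.getNumSamples();
    juce::ignoreUnused (numSamplesThisBlock, midi);
}
```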


Oh, ok, so the value of the samples from -1 to 1 indicates where it is in terms of L & R?
Then I wonder why processBlock() has to iterate through one channel at a time - what's the reasoning behind that? In the buffer, does it go through each value that is between 0 and 1 first, then each value that's between -1 and 0?

an audio buffer object with two channels has two sets of memory locations to read from that are both mapped to the same sample indices.

so if you have an audio buffer that is 100 samples long with 2 channels, then accessing the memory for sample #50 in channel 0 accesses a different memory location than asking for sample #50 of channel 1. They’re completely different numbers stored in different places, even though for ease of use of the buffer object, we call them both “sample #50”.
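In code, that could look something like this (a sketch; readSample50Demo and the buffer size are made up):

```cpp
#include <juce_audio_basics/juce_audio_basics.h>

void readSample50Demo()
{
    juce::AudioBuffer<float> buffer (2, 100);          // 2 channels, 100 samples each

    const float* ch0 = buffer.getReadPointer (0);
    const float* ch1 = buffer.getReadPointer (1);

    // Same sample index, two different memory locations holding two independent values:
    const float sample50InChannel0 = ch0[50];
    const float sample50InChannel1 = ch1[50];

    juce::ignoreUnused (sample50InChannel0, sample50InChannel1);
}
```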

the actual value of the samples from -1 to 1 is the audio’s amplitude, which has nothing to do with the sample’s stereo placement.

@benvining
Interesting, thanks…
Just wondering: if all that's being sent to the plugin from the DAW are audio values between -1 and +1 (which represent the amplitude only), then how do we discern stereo field information?

By using multiple channels. Imagine 1 wire going from your amp to one speaker to carry the Left channel, and a second wire to another speaker for the Right. The same idea applies to an audio buffer: it’s an array of channels.

For example, an audio buffer of 2 channels will likely mean Left and Right channels (ie: we’re in a stereo configuration), so to access the Left channel you would request index 0 from the buffer.
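For instance (a sketch, assuming an ordinary stereo layout; silenceRightChannel is a made-up helper):

```cpp
#include <juce_audio_basics/juce_audio_basics.h>

// Channel 0 -> Left, channel 1 -> Right in an ordinary stereo setup.
void silenceRightChannel (juce::AudioBuffer<float>& stereoBuffer)
{
    float* right = stereoBuffer.getWritePointer (1);   // ask for channel index 1 = Right

    for (int i = 0; i < stereoBuffer.getNumSamples(); ++i)
        right[i] = 0.0f;                               // channel 0 (Left) is untouched, so the result sits hard left
}
```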

So, an AudioBuffer of one second is not 44100 floating point values in length, it's actually 88200 floating point values. But for ease of use, sample #1 in the AudioBuffer can actually reference two floating point values, one for channel 0 and one for channel 1.

If the sample rate is 44100, then yeah, that's one second's worth of audio. But the total sample count as a whole, num channels * sample rate, doesn't really mean anything except how much you've allocated.

The data per channel is deliberately separate so as to operate on audio data in a straightforward way.

Well, if you’re working in standard stereo, then channel 0 of the buffer will be the left signal and channel 1 will be the right signal.

Any kind of situation with more than 2 channels is non-standardized and will get much more complex very quickly…


thanks, I think I got it now.
An AudioBuffer object then is laid out like this:
[channel 0 sample #1 value],[channel 1 sample #1 value],[channel 0 sample #2 value],[channel 1 sample #2 value],...
or
[channel 0 sample #1 value],[channel 0 sample #2 value],...[channel 0 sample #44100 value],[channel 1 sample #1 value],[channel 1 sample #2 value],...[channel 1 sample #44100 value]

Kind of like the second one, but you can't assume anything about the data being laid out linearly in memory, with the channels following one after another. You have to treat the channels as separate buffers, because they are separately allocated.

Channel 0 : c0s0 c0s1 c0s2 c0s3...
Channel 1 : c1s0 c1s1 c1s2 c1s3...
...
Channel 7 : c7s0 c7s1 c7s2 c7s3...
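In JUCE terms, that's something like this (sketch only; eachChannelIsItsOwnArray is a made-up name):

```cpp
#include <juce_audio_basics/juce_audio_basics.h>

void eachChannelIsItsOwnArray (const juce::AudioBuffer<float>& buffer)
{
    // One pointer per channel; each points into its own allocation.
    const float* const* channels = buffer.getArrayOfReadPointers();

    for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
    {
        const float* channelData = channels[ch];       // c<ch>s0, c<ch>s1, c<ch>s2, ...
        juce::ignoreUnused (channelData);
    }

    // You can't assume channels[1] == channels[0] + buffer.getNumSamples():
    // the channels are not guaranteed to sit back-to-back in memory.
}
```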

I think you’re overcomplicating this for yourself.

To resize a JUCE audio buffer to hold 1 second's worth of samples, you would call setSize (2, 44100). This tells the buffer: I want 2 channels, each with 44100 samples.

Then, when doing your audio processing, you can write any code you want that references sample #s 0 to 44099 (i.e., "numSamples - 1") and run that code on any channel of the buffer, because every channel will have a sample 0, a sample 1, a sample 2… all the way up to 44099.
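Putting that together, a sketch (the function name and the 0.5 gain are just stand-ins for whatever processing you actually do):

```cpp
#include <juce_audio_basics/juce_audio_basics.h>

void makeAndProcessOneSecondBuffer()
{
    juce::AudioBuffer<float> buffer;
    buffer.setSize (2, 44100);       // 2 channels, 44100 samples *per channel*
    buffer.clear();

    for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
    {
        float* data = buffer.getWritePointer (ch);

        for (int i = 0; i < buffer.getNumSamples(); ++i)    // i runs 0 .. 44099 on every channel
            data[i] *= 0.5f;                                 // example per-sample operation
    }
}
```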

When speaking about buffer length, we typically give the number of channels and the number of samples in EACH channel. So, for a 1-second-long 2-channel buffer, it is wrong to say that it is 88200 samples long – the reason why is that the number of samples you can actually access and use on each channel is 44100, NOT 88200. So attempting to access any index greater than 44099 will give you an assertion failure or a segfault.

The takeaway:
A one second audio buffer at sample rate 44.1 kHz has a length of 44100 samples. That is how many samples are in each channel of the buffer.


@xenakios @benvining @jrlanglois
👍