I think you are overlooking an important property of the dsp::AudioBlock here that makes it impossible by design to expose a raw float** pointer:
The AudioBlock offers a getSubBlock function that lets you take an arbitrary sub-block of the original block, starting at the nth sample. This opens up great options; one of my favourites is a pattern like the following, useful when you need to guarantee that certain parts of your processing code are never called with a block size greater than a certain limit:
void processBlock (const juce::dsp::AudioBlock<float>& block)
{
    if (block.getNumSamples() > maxBlockSize)
    {
        auto a = block.getSubBlock (0, maxBlockSize);
        auto b = block.getSubBlock (maxBlockSize);

        processBlock (a);
        processBlock (b);
        return;
    }

    // if we get here, we can be sure that the block size is <= maxBlockSize
    // do processing...
}
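If you want to see that splitting guarantee in isolation, here is a minimal, JUCE-free sketch of the same recursion. The Block struct, maxBlockSize constant and the gain processing are stand-ins for illustration, not JUCE API:

#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for an AudioBlock: just a pointer + length into one channel.
struct Block
{
    float* data;
    std::size_t numSamples;

    Block subBlock (std::size_t start, std::size_t len) const { return { data + start, len }; }
    Block subBlock (std::size_t start) const { return { data + start, numSamples - start }; }
};

constexpr std::size_t maxBlockSize = 4;

// Applies a gain, but is guaranteed to see at most maxBlockSize samples per call.
void processBlock (const Block& block)
{
    if (block.numSamples > maxBlockSize)
    {
        processBlock (block.subBlock (0, maxBlockSize));
        processBlock (block.subBlock (maxBlockSize));
        return;
    }

    for (std::size_t i = 0; i < block.numSamples; ++i)
        block.data[i] *= 2.0f;
}

Note that the recursion handles any oversized block: the second sub-block is simply split again until every leaf call sees at most maxBlockSize samples, and each sample is still processed exactly once.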
Now, since the dsp::AudioBlock is deliberately a super lightweight class, did you ever wonder how the sub-block trick works? The answer is that the class has essentially four members:
- The channel count
- The sample count
- A pointer to an array of pointers, referencing the start of the referenced sample buffer per channel
- A start sample offset
The implementation of getChannelPointer looks like this:
SampleType* getChannelPointer (size_t channel) const noexcept
{
    jassert (channel < numChannels);
    jassert (numSamples > 0);

    return channels[channel] + startSample;
}
So you see that the channels array doesn't describe a channel's data on its own: a channel's data is the channel pointer plus the startSample offset. That cannot be exposed as a simple float**, which is probably why there is no such interface.

This is a bit of a concept shift compared to old-school buffers, but the same sub-buffer trick wouldn't have been possible with the old AudioBuffer, which is a much heavier data structure that also handles memory management etc. along the way. With the AudioBlock you also have great options to implement sample buffers without using an AudioBlock internally at all, so in some classes there isn't necessarily any internal AudioBlock to expose in the first place. To me the AudioBlock is far more flexible and better suited for modern C++ dsp code than the AudioBuffer.
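To make that concrete, here is a rough, JUCE-free sketch of the idea. The member names mirror the four members described above, but MiniBlock is a hypothetical illustration, not the actual JUCE implementation:

#include <cassert>
#include <cstddef>
#include <vector>

template <typename SampleType>
struct MiniBlock
{
    std::size_t numChannels;
    std::size_t numSamples;
    SampleType* const* channels;   // one pointer per channel, at the buffer start
    std::size_t startSample;       // offset applied on every access

    // A sub-block only adjusts the offset and length; the channel
    // pointer array is shared with the parent block.
    MiniBlock getSubBlock (std::size_t offset, std::size_t len) const
    {
        return { numChannels, len, channels, startSample + offset };
    }

    SampleType* getChannelPointer (std::size_t channel) const
    {
        assert (channel < numChannels);
        return channels[channel] + startSample;   // raw channels[] alone would be wrong
    }
};

Because channels[ch] alone points at the start of the full buffer, handing it out as a float** would silently drop the startSample offset for every sub-block.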
This is probably a good decision. I did that with our entire legacy dsp codebase approx. 1 1/2 years ago and never looked back. It took me about a day of work, which is fine if you want to make your codebase future-proof. If you don't interface with any third-party libs that expect a float**, you'll likely find that passing an AudioBlock wherever possible and using getChannelPointer in the innermost loops is basically all you need. It's also a non-breaking change if you just swap your AudioBuffer<float>& arguments for const dsp::AudioBlock<float>& arguments: declared as const references, they implicitly convert from an AudioBuffer, so the surrounding code will compile as before. Just make sure you declare them const, even if you want to write to them. This seems a bit unintuitive at first, but with the dsp::AudioBlock the difference between a read-only and a writable buffer is made by declaring the sample type const or not: a const dsp::AudioBlock<float> is writable, while a const dsp::AudioBlock<const float> is read-only.
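The const-vs-writable distinction might look odd, but it follows directly from the block only storing pointers: a const block still hands out non-const pointers unless the sample type itself is const. A tiny sketch of the same rule with a hypothetical Ref struct (not JUCE code):

#include <cassert>
#include <cstddef>
#include <vector>

// A const Ref still lets you write through the pointer it holds;
// only a const SampleType forbids writing. Same rule as dsp::AudioBlock.
template <typename SampleType>
struct Ref
{
    SampleType* samples;
    SampleType* get() const { return samples; }   // const method, non-const pointee
};

void applyGain (const Ref<float>& r, std::size_t n)   // writable despite const&
{
    for (std::size_t i = 0; i < n; ++i)
        r.get()[i] *= 0.5f;
}

float sum (const Ref<const float>& r, std::size_t n)  // read-only: the float is const
{
    float s = 0.0f;

    for (std::size_t i = 0; i < n; ++i)
        s += r.get()[i];

    // r.get()[0] = 0.0f;  // would not compile: pointee is const
    return s;
}

So the constness of the block itself just means "I won't repoint this block somewhere else", while the constness of the sample type decides whether the audio data is writable.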
Hope this helps you a bit with your transition 