Simple question: if I have a multichannel audio buffer, is it safe to call getReadPointer for each channel it contains and then hand those pointers to different threads that read from the channels in parallel (one thread per channel, so no two threads will ever access the same channel’s data)?
There’s no guarantee - it depends where those channels came from!
In most cases it’s true that they’ll all point to different buffers and you’ll be safe, but it could be the case that e.g. some audio devices might take a mono input channel and provide two copies of the same one as if they’re a stereo pair.
…and of course in the case of the buffers that a DAW passes to a plugin, all bets are off. There are so many DAWs and situations in which the plugins could be used that there’s no way I could even guarantee that my own DAW would behave in the way you’re hoping!
If you’re only reading data, rather than writing, you’ll be OK.
Okay, I’m talking about a buffer that gets allocated through the AudioBuffer class itself, so I assume no problem here when only reading?
To clarify what I’m planning: I’m working on a multichannel real-time analyzer Component that owns a rather big buffer and fills it over multiple audio blocks by copying samples into it. Once enough samples have accumulated, every channel should be rendered in parallel on its own background thread, where an FFT is calculated on the data and some images are then rendered from the FFT output. In fact, while writing this, I’m now considering calculating the FFT in place, or applying some windowing to the samples before the FFT, which would mean write access too. Will that also be safe in this particular case?
Yes… If it really has been allocated by the AudioBuffer class then the channels will definitely all be distinct. However do note that you can create an AudioBuffer from a user-supplied array of pointers, in which case they could contain anything.
Yes, I’m aware of that. But in my case I call AudioBuffer::setSize on an AudioBuffer that was previously created by the AudioBuffer() constructor - so it won’t point to any user supplied memory.
Yep, in that case you’ll be fine.
Sounds like you’re doing some overlap-add based processing? You can’t do that FFT in-place, if that’s the case, because you need those samples for the next frame of processing. You’ll have to copy a window of samples from your large AudioBuffer into an FFT-sized buffer, apply the window, and then perform the FFT in-place on that.
The AudioBuffer class allocates its memory contiguously, so you might get some false sharing on the boundaries between channels. Not sure if that’s a big deal, but you could avoid it by allocating each channel on the heap yourself.
No, I don’t do any OLA processing - as I said above, I’m building a real-time analyzer, visualizing the magnitude of the FFT.
OK, but you’ll still have to segment the incoming audio before taking the FFT anyway, like I described.