Should I use Array or AudioBuffer for wavetable synthesis?

In the JUCE wavetable synthesis tutorial, an AudioBuffer is used to store the wavetable.
https://docs.juce.com/master/tutorial_wavetable_synth.html
Is there a specific reason for that? Why not use the JUCE Array class, or even a raw array if you know the wavetable size? Shouldn't a raw array be the fastest, since it doesn't carry all those extra member functions?

Also, a related question: if I'm using an Array, should I specify the minimumAllocatedSize like this? Since I didn't see it used in the tutorial, I'm guessing that setting a minimumAllocatedSize is essentially pointless in terms of performance?
Array<float, juce::DummyCriticalSection, WAVE_SIZE> wave;

Functions only consume CPU if they are called.

First of all, an Array won't be any more performant than an AudioBuffer. Both use heap-allocated memory under the hood, and the generated assembly will probably look nearly identical in the end. As Xenakios said, the many member functions that AudioBuffer has don't matter for performance as long as they aren't called. On the other hand, they can be helpful when manipulating the buffer.

Regarding the minimum allocated size: it only keeps the array from shrinking its allocation below the given value, which avoids time-consuming reallocations when the array is resized frequently. In the case of a wavetable, however, you won't need to resize the array frequently.


Thank you!