So I started all my audio coding with the FX and Synth books by Will Pirkle. When poking around the interwebs, I found that some experienced coders were dissatisfied with the code and concepts presented within. One particular point was the omission of audio buffers (i.e. everything is done one sample at a time).
So what does the workflow look like, and what are the benefits of using buffers?
I’ll just type what I think it is. Please correct me if I am wrong:
I create a whole buffer of samples in my oscs, pass the entire buffer to my filters, then to the next component, and so on. The benefit seems to be that data fetching from RAM is heavily reduced, since each variable stays in cache and gets reused many times within the loop.
Is there more to it? Are there unforeseen problems arising here? (steppy controls and such…)
Appreciate any input and nudges towards more literature.