I’m starting my next project, which I intend to write buffer-based, with SSE/AVX extensions to maximize performance.
A logical problem I encountered concerns modulation: let’s say we have a synth with a mod matrix, where FM is set up from Osc2 to Osc1.
Since Osc2’s buffer gets filled after Osc1’s, the modulation will at best be an entire buffer behind. Sample-based processing doesn’t have this problem, since I can always use the sample that was rendered last.
One solution I could think of is dividing everything into “modulators” and “audio”, where the modulators are rendered first and only modulation from modulators to audio is allowed. But this would rule out FM between oscillators, LFOs modulating themselves, and so on…
This question was already asked here but never answered.