Buffer Based Processing and Sample Accurate Automation

I’m starting my next project, which I intend to write buffer-based, with SSE/AVX extensions to maximize performance.

A logical problem I encountered is in the modulation: let’s say we have a synth with a mod matrix, where FM is set up from Osc2 to Osc1.

Since the Osc2 buffer gets filled after the buffer for Osc1, the modulation will at best be an entire buffer behind. Sample-based processing doesn’t have this problem, since I can always use the sample that was rendered last.
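To make the ordering problem concrete, here is a minimal sketch (all names such as `Synth`, `processBlock` and `kBlockSize` are illustrative, not a real engine’s API). Osc2’s "FM" output is simplified to a ramp so the one-block lag is visible:

```cpp
#include <array>
#include <cstddef>

// Minimal sketch of the ordering problem: Osc1 is modulated by Osc2,
// but with buffer-based processing Osc1 can only read Osc2's PREVIOUS
// buffer, so the modulation lags one full block.
constexpr std::size_t kBlockSize = 4;

struct Synth
{
    std::array<float, kBlockSize> osc1Buf {};
    std::array<float, kBlockSize> osc2Buf {}; // still holds last block's output
    float osc2Phase = 0.0f;                   // stand-in for a real oscillator

    void processBlock()
    {
        // Osc1 renders first: the modulation input it sees was computed
        // during the PREVIOUS call to processBlock().
        for (std::size_t i = 0; i < kBlockSize; ++i)
            osc1Buf[i] = osc2Buf[i];          // "FM" reduced to pass-through for clarity

        // Only now does Osc2 produce the current block.
        for (std::size_t i = 0; i < kBlockSize; ++i)
            osc2Buf[i] = osc2Phase += 1.0f;   // 1, 2, 3, 4, then 5, 6, ...
    }
};
```

After two calls to `processBlock()`, `osc1Buf` holds Osc2’s first block, i.e. the modulation arrives exactly one block late.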

One solution I could think of is dividing everything into “modulators” and “audio”, where the modulators are rendered first and only modulation from modulators to audio is allowed. But that would rule out FM, LFOs modulating themselves, and so on…

This question was already asked here but never answered.

We did this with sample-based processing for our synths. I don’t see any other solution for cross-modulation.
I also think the code will be much more maintainable for synths with sample-based processing. You can, for example, use vectors for stereo processing or for parallel filters in that case.
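One way to read the “vectors for stereo processing” idea: with sample-based processing you can still vectorize, just across lanes (channels or parallel filter instances) instead of across consecutive samples. A portable sketch without intrinsics, assuming the compiler auto-vectorizes the lane loop (the `OnePole` filter and its coefficient are illustrative):

```cpp
#include <array>
#include <cstddef>

// Sketch: vectorize across lanes (stereo channels here; the lanes could
// equally hold parallel filter instances) rather than across time.
constexpr std::size_t kLanes = 2; // left and right

struct OnePole // the same one-pole lowpass runs on both channels at once
{
    std::array<float, kLanes> state {};
    float a = 0.5f; // smoothing coefficient (assumed value, for illustration)

    std::array<float, kLanes> processSample(std::array<float, kLanes> in)
    {
        std::array<float, kLanes> out {};
        for (std::size_t ch = 0; ch < kLanes; ++ch) // lane loop: trivial to auto-vectorize
        {
            state[ch] += a * (in[ch] - state[ch]);
            out[ch]    = state[ch];
        }
        return out;
    }
};
```

Feedback modulation stays sample-accurate, because each call still processes exactly one sample per lane.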

Thanks for your answer!

I’ve had the question on my mind for the last couple of days and couldn’t think of a solution either. Another workaround, though, could be to render one vector of SSE/AVX samples per module before moving on to the next module. I’m not certain the speedup would still be as large, since the memory access pattern is not as nice.
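The sub-block idea could look like the sketch below (again hypothetical names, scalar code standing in for intrinsics): each module renders one SIMD-vector-sized chunk at a time, so the feedback path is only `kChunk` samples late instead of a whole block:

```cpp
#include <array>
#include <cstddef>

// Sketch of the sub-block workaround: interleave the modules in chunks
// of kChunk samples, shrinking the feedback latency from kBlockSize
// samples down to kChunk samples.
constexpr std::size_t kBlockSize = 8;
constexpr std::size_t kChunk     = 4; // e.g. one SSE float vector

struct Synth
{
    std::array<float, kBlockSize> osc1Buf {};
    std::array<float, kBlockSize> osc2Buf {};
    float osc2Phase = 0.0f; // stand-in for a real oscillator

    void processBlock()
    {
        for (std::size_t start = 0; start < kBlockSize; start += kChunk)
        {
            // Osc1 reads the chunk Osc2 wrote in the previous iteration
            // (wrapping to the last chunk of the previous block at start == 0).
            std::size_t prev = (start >= kChunk) ? start - kChunk
                                                 : kBlockSize - kChunk;
            for (std::size_t i = 0; i < kChunk; ++i)
                osc1Buf[start + i] = osc2Buf[prev + i]; // only kChunk samples late

            for (std::size_t i = 0; i < kChunk; ++i)
                osc2Buf[start + i] = osc2Phase += 1.0f; // 1, 2, 3, ...
        }
    }
};
```

Within one block, the second chunk of Osc1 already sees Osc2’s first chunk, so the lag is four samples rather than eight.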

But maybe I’m in love with the idea of writing vectorized code and should look at the bigger picture instead? :slight_smile:

Same here, but don’t forget to profile. Most of the time the compiler does its job very well, and it’s not worth adding the complexity of hand-written vector code.
Also keep in mind that Apple won’t support SSE and AVX in the future, since Apple silicon is ARM-based. Make sure you use the JUCE abstractions, or your own, for those operations.
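In JUCE that abstraction is `juce::dsp::SIMDRegister`; a home-grown one can be as small as a lane struct whose loops the compiler lowers to SSE/AVX on x86 and NEON on ARM. A minimal sketch (the `Vec4` type is illustrative, not the JUCE API):

```cpp
#include <array>
#include <cstddef>

// Tiny portable "vector" type: no raw intrinsics leak into the DSP code,
// and the compiler can lower the lane loops to whatever SIMD the target
// CPU offers. juce::dsp::SIMDRegister plays this role in JUCE.
struct Vec4
{
    std::array<float, 4> v {};

    Vec4 operator+ (const Vec4& o) const
    {
        Vec4 r;
        for (std::size_t i = 0; i < 4; ++i) r.v[i] = v[i] + o.v[i];
        return r;
    }

    Vec4 operator* (const Vec4& o) const
    {
        Vec4 r;
        for (std::size_t i = 0; i < 4; ++i) r.v[i] = v[i] * o.v[i];
        return r;
    }
};
```

DSP code written against such a type stays unchanged when the target switches from x86 to Apple silicon; only the (auto-)vectorized backend differs.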