OK. I had assumed the buffer filled by renderNextBlock was the only point at which you can act on a voice's output. That goes against what has been suggested in other threads, unless I'm misunderstanding:
“For per-voice effects, you’d just add the effect processing to your voice class’s rendering code.”
https://forum.juce.com/t/sampler-help/16040/4
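
To check I'm reading that right, is this roughly what's meant? A minimal, untested sketch, assuming a `juce::SamplerVoice` subclass (the class name and the one-pole low-pass are placeholders of mine, not anything from that thread):

```cpp
#include <JuceHeader.h>

// Sketch: render this voice into a private scratch buffer, apply the
// per-voice effect there, then mix into the shared output buffer that
// the Synthesiser passes in.
class FilteredSamplerVoice : public juce::SamplerVoice
{
public:
    void renderNextBlock (juce::AudioBuffer<float>& outputBuffer,
                          int startSample, int numSamples) override
    {
        // NB: allocating here isn't realtime-safe; a real voice would
        // preallocate the scratch buffer during setup.
        scratch.setSize (outputBuffer.getNumChannels(), numSamples, false, false, true);
        scratch.clear();

        // Let the base class render only this voice's samples.
        juce::SamplerVoice::renderNextBlock (scratch, 0, numSamples);

        // Per-voice effect: a naive one-pole low-pass as a stand-in.
        for (int ch = 0; ch < juce::jmin (2, scratch.getNumChannels()); ++ch)
        {
            auto* samples = scratch.getWritePointer (ch);

            for (int i = 0; i < numSamples; ++i)
                samples[i] = lastOut[ch] += coeff * (samples[i] - lastOut[ch]);
        }

        // Mix the processed voice into the shared output buffer.
        for (int ch = 0; ch < outputBuffer.getNumChannels(); ++ch)
            outputBuffer.addFrom (ch, startSample, scratch, ch, 0, numSamples);
    }

    void setCoeff (float newCoeff) noexcept  { coeff = newCoeff; }

private:
    juce::AudioBuffer<float> scratch;
    float lastOut[2] { 0.0f, 0.0f };
    float coeff = 0.2f; // fixed cutoff for the sketch
};
```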
Should I iterate over the voices in the main audio processor, then? Something like the sketch below is what I have in mind:
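
This is a hypothetical helper of mine, just pushing parameter changes down to each voice; I assume the audio itself would still be processed inside each voice's renderNextBlock, since the Synthesiser mixes every voice into one buffer before processBlock sees it:

```cpp
// Hypothetical helper called from my AudioProcessor before
// synth.renderNextBlock(...): walks the Synthesiser's voices and pushes
// a parameter to each one. FilteredSamplerVoice is the sketch above.
void updateVoiceParameters (juce::Synthesiser& synth, float newCoeff)
{
    for (int i = 0; i < synth.getNumVoices(); ++i)
        if (auto* v = dynamic_cast<FilteredSamplerVoice*> (synth.getVoice (i)))
            v->setCoeff (newCoeff);
}
```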
Do you know of any examples I can look at?
