I can’t get my SamplerVoice subclass to use the DSP modules correctly.
If I run the filter in the AudioProcessor’s processBlock it works fine, but when I try to put a filter per voice I get this error that I’m not sure how to get past.
void renderNextBlock (AudioSampleBuffer& outputBuffer, int startSample, int numSamples) override
{
    SamplerVoice::renderNextBlock (outputBuffer, startSample, numSamples);

    dsp::AudioBlock<float> block (outputBuffer);
    lowPassFilter.process (dsp::ProcessContextReplacing<float> (block));
}
JUCE Assertion failure in juce_ProcessorDuplicator.h:69
Hmm, so now I got it to work there, but it’s not behaving the same as when I do the processing in processBlock. Frustrating. Clicks and weird audio behavior.
Where (i.e. on what occasion) are you calling reset()?
It should only be called if there is a discontinuity in the signal, and I’d guess that if the signal is not silent at that moment, it could have nasty effects…
You can’t directly apply effects in the JUCE synth voice’s output buffer, because the renderNextBlock calls of all the voices must accumulate into that same buffer. So each of your voices is going to need its own buffer, where you render the voice and process the effects, and then sum the processed sound into the shared output buffer.
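A framework-agnostic sketch of that idea, using plain std::vector<float> in place of AudioSampleBuffer (the oscillator, the toy one-pole low-pass, and the coefficients are all hypothetical stand-ins for whatever your voice and dsp::IIR filter actually do):

```cpp
#include <cmath>
#include <vector>

// Each voice renders into its own private scratch buffer, applies its
// per-voice effect there, and only then ADDS the result into the shared
// output buffer -- mirroring how JUCE voices must accumulate.
struct Voice
{
    float phase = 0.0f, increment = 0.1f; // hypothetical naive sine oscillator
    float filterState = 0.0f;             // toy one-pole low-pass state

    void renderNextBlock (std::vector<float>& output, int startSample, int numSamples)
    {
        std::vector<float> scratch ((size_t) numSamples, 0.0f);

        // 1) synthesise into the private scratch buffer
        for (auto& s : scratch)
        {
            s = std::sin (phase);
            phase += increment;
        }

        // 2) per-voice effect, still in the scratch buffer
        for (auto& s : scratch)
        {
            filterState += 0.2f * (s - filterState);
            s = filterState;
        }

        // 3) accumulate into the shared output buffer -- never overwrite it
        for (int i = 0; i < numSamples; ++i)
            output[(size_t) (startSample + i)] += scratch[(size_t) i];
    }
};
```

Because step 3 only adds, two voices rendering into the same buffer produce exactly the sum of their individually filtered outputs, which is what per-voice effects require.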
Ok. I assumed the buffer passed to renderNextBlock was exclusive to the voice. This goes against what has been suggested in other threads, unless I am misunderstanding:
“For per-voice effects, you’d just add the effect processing to your voice class’s rendering code.”
Jules just omits mentioning in that post that, if the effect code requires manipulating a buffer, the voice output buffer can’t be used. (The output buffer already contains the summed output of the previously rendered synth voices, so you wouldn’t end up with proper per-voice effects.)
So, you will either have to do your effects processing without touching the shared voice output buffer (where available, use the sample-by-sample processing calls in the DSP classes) or use a dedicated processing buffer in the voice for both the synthesis and the effects. (You might be able to get away with some kind of shared scratch buffer for that too, since the voices are not rendered in parallel.)
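The sample-by-sample variant can be sketched like this (again plain C++ rather than JUCE; the one-pole filter and its coefficient are hypothetical). Because the effect is applied one sample at a time, the voice can accumulate straight into the shared output buffer and no scratch buffer is needed:

```cpp
#include <cmath>
#include <vector>

// Toy per-voice filter processed one sample at a time.
struct OnePole
{
    float state = 0.0f;
    float coeff = 0.2f; // hypothetical smoothing coefficient

    float processSample (float x)
    {
        state += coeff * (x - state);
        return state;
    }
};

struct PerSampleVoice
{
    float phase = 0.0f, increment = 0.1f; // hypothetical sine oscillator
    OnePole filter;                       // one filter instance per voice

    void renderNextBlock (std::vector<float>& output, int startSample, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            float raw = std::sin (phase);                 // synthesise one sample
            phase += increment;
            float filtered = filter.processSample (raw);  // per-voice effect
            output[(size_t) (startSample + i)] += filtered; // accumulate only
        }
    }
};
```

The same pattern applies to any JUCE DSP class that exposes a per-sample call: filter the voice’s own sample first, then add it to the shared buffer.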
Another possible solution is to do your own voice summing that doesn’t use an accumulating shared buffer for the voice outputs.