Render sampler voice effects using DSP module

I can’t get my SamplerVoice subclass to use the DSP module correctly.

If I run the filter in the AudioProcessor’s processBlock it works fine, but when I try to put a filter on each voice I hit this assertion failure that I’m not sure how to get past.

void renderNextBlock (AudioSampleBuffer& outputBuffer, int startSample, int numSamples) override
    {
        SamplerVoice::renderNextBlock(outputBuffer, startSample, numSamples);
        dsp::AudioBlock<float> block (outputBuffer);
        lowPassFilter.process(dsp::ProcessContextReplacing<float> (block));
        return;
    }

JUCE Assertion failure in juce_ProcessorDuplicator.h:69

The assertion tells you that your block has more channels than your ProcessorDuplicator has processor instances…

Do you call ProcessorDuplicator::prepare() on your duplicator or the whole chain?
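For reference, a rough sketch of the relationship (the values here are placeholders, not code from your project): the duplicator only creates its per-channel filter instances inside prepare(), and the number it creates comes from spec.numChannels, so that has to cover every channel of the block you later pass to process().

dsp::ProcessSpec spec;
spec.sampleRate       = 44100.0;   // placeholder values
spec.maximumBlockSize = 512;
spec.numChannels      = 2;         // must be >= the channel count of the block you process

lowPassFilter.prepare (spec);      // without this the duplicator has no per-channel
                                   // filters and the jassert in juce_ProcessorDuplicator.h fires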

I’m not sure where I would set that correctly. I declare and initialize my lowPassFilter with the following:

class SynthVoice : public SamplerVoice {
public:
    //==============================================================================
    SynthVoice() : lowPassFilter (dsp::IIR::Coefficients<float>::makeLowPass (44100, 20000.0f, 0.1f)) {}

    /** Destructor. */
    ~SynthVoice() {}
    
    //==============================================================================
    
    bool canPlaySound(SynthesiserSound* sound) override {
        return dynamic_cast<SynthSound*>(sound) != nullptr;
    }
    
    void renderNextBlock (AudioSampleBuffer& outputBuffer, int startSample, int numSamples) override
    {
        SamplerVoice::renderNextBlock(outputBuffer, startSample, numSamples);
        dsp::AudioBlock<float> block (outputBuffer);
        lowPassFilter.process(dsp::ProcessContextReplacing<float> (block));
        return;
    }
    
    //DSP
    dsp::ProcessorDuplicator<dsp::IIR::Filter <float>, dsp::IIR::Coefficients <float>> lowPassFilter;

private:
    
};

EDIT:

I also have this in my prepareToPlay function:

dsp::ProcessSpec spec;
spec.sampleRate = sampleRate;
spec.maximumBlockSize = samplesPerBlock;
spec.numChannels = getTotalNumOutputChannels();

Ah, that is great. Can you somehow hand that information to the constructor, so you can call

lowPassFilter.prepare (spec);

in the constructor of your SynthVoice?
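Maybe something along these lines (an untested sketch only; the constructor signature is an assumption, not code from your project):

class SynthVoice : public SamplerVoice
{
public:
    explicit SynthVoice (const dsp::ProcessSpec& spec)
        : lowPassFilter (dsp::IIR::Coefficients<float>::makeLowPass (spec.sampleRate, 20000.0f, 0.1f))
    {
        lowPassFilter.prepare (spec);   // prepared before the first renderNextBlock() call
    }

    // ... rest of the class as before
};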

I haven’t written a synth yet, so I don’t know. Maybe someone with a better idea can chime in…

Crap, I forgot to add that I’m already passing spec there too.

    for (int i = 0; i < mySynth.getNumVoices(); i++) {
        SynthVoice* voxVoice = (SynthVoice*) mySynth.getVoice(i);
        voxVoice->lowPassFilter.prepare(spec);
        voxVoice->lowPassFilter.reset();
    }

And if I comment out reset() here it works, but I get a nasty click from the filter.

Hmm, so now I got it to work there, but it’s not behaving the same as when I was processing it in processBlock. Frustrating. Clicks and weird audio behavior.

Where (i.e. on what occasion) are you calling reset()?
It should only be called if there is a discontinuity in the signal, and I guess calling it while the signal is not silent could have nasty effects…

I am only calling it in prepareToPlay, and it works fine when I process an entire block. The artifacts happen when I move the processing into the voice’s renderNextBlock.

You can’t directly do effects in the JUCE synth voice output buffer, because the renderNextBlock calls accumulate into the same buffer. So each of your voices is going to need its own buffer where you render the voice and process the effects, and then sum the processed sound into the voice output buffer.
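Roughly like this (only a sketch; the voiceBuffer member and pre-sizing it in prepareToPlay are assumptions, not code from this thread):

void renderNextBlock (AudioSampleBuffer& outputBuffer, int startSample, int numSamples) override
{
    // voiceBuffer is a per-voice scratch buffer, pre-sized in prepareToPlay
    // so nothing allocates on the audio thread.
    voiceBuffer.clear();

    // 1. Render only this voice into the scratch buffer.
    SamplerVoice::renderNextBlock (voiceBuffer, 0, numSamples);

    // 2. Filter the scratch buffer, so the filter never touches the other voices.
    dsp::AudioBlock<float> block (voiceBuffer);
    auto sub = block.getSubBlock (0, (size_t) numSamples);
    lowPassFilter.process (dsp::ProcessContextReplacing<float> (sub));

    // 3. Sum the processed voice into the shared, accumulating output buffer.
    for (int ch = 0; ch < outputBuffer.getNumChannels(); ++ch)
        outputBuffer.addFrom (ch, startSample, voiceBuffer, ch, 0, numSamples);
}

AudioSampleBuffer voiceBuffer;   // one scratch buffer per voice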

OK. I assumed the buffer passed to renderNextBlock was exclusive to the voice and could be acted on directly. This goes against what has been suggested in other threads, unless I am misunderstanding.

“For per-voice effects, you’d just add the effect processing to your voice class’s rendering code.”

https://forum.juce.com/t/sampler-help/16040/4

Should I iterate the voices in the main audio processor then?

Do you know any examples I can look at?

Jules just omits mentioning in that post that if the effect code requires manipulating a buffer, the voice output buffer can’t be used. (The output buffer already contains the summed output of the previously rendered synth voices, so you wouldn’t end up having proper per-voice effects.)

So you will either have to do your effects processing without using the shared voice output buffer (if available, use sample-by-sample processing calls in the DSP), or use a dedicated processing buffer in the voice for both the synthesis and the effects. (You might be able to get away with some kind of shared buffer for that too, since the voices are not rendered in parallel.)

It’s also a possible solution to do your own voice summing that doesn’t use an accumulating shared buffer for the voice outputs.
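For the sample-by-sample route, a loose sketch (this assumes a voice that generates its own samples rather than calling SamplerVoice’s rendering; getNextSample() is a hypothetical placeholder for that playback code):

void renderNextBlock (AudioSampleBuffer& outputBuffer, int startSample, int numSamples) override
{
    for (int i = 0; i < numSamples; ++i)
    {
        const float dry = getNextSample();   // hypothetical per-sample playback
        for (int ch = 0; ch < outputBuffer.getNumChannels(); ++ch)
            outputBuffer.addSample (ch, startSample + i,
                                    channelFilters[(size_t) ch].processSample (dry));
    }
}

std::vector<dsp::IIR::Filter<float>> channelFilters;   // one plain filter per channel,
                                                        // instead of the ProcessorDuplicator

That way only the filtered voice samples get added into the shared buffer, and nothing already in it is filtered again.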

Thank you for the suggestions! It was not clear to me that each voice renders into an accumulating shared buffer.

I’ll take a look at how the voices are currently summed and try to alter that path to fit my desired output.