Filter cutoff 'staircasing' when using larger buffer sizes

I want to add a Moog-style ladder filter to a synthesizer I'm developing. I've found a great resource with various models, and when implemented they sound pretty good, except for one little problem: all of the filters I've tried suffer from what I like to call 'staircasing'.

In my code, the cutoff value is updated every sample, but since most of the filters I've tested (including JUCE's IIRFilter) process samples per buffer block, the higher I set the buffer size, the more apparent this 'staircasing' becomes. In other words: because the cutoff is only set once, before the for-loop that calculates the next buffer block inside the filter class, the filter's resulting sound only changes once per block. For filters especially, this behaviour is unwanted, as one can imagine.

Solutions I’ve tried:

  • using a LinearSmoothedValue for the cutoff value in my synthesizer, which is set every sample but only read just before the filter processes the next buffer block;
  • using a ‘current’ to ‘target’ value which increments by inc = (target - current) / buffersize; every sample.

Neither of them has any effect on the actual sound, which is to be expected, since the filter still only reads the value once per block (see the sketch below).
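Roughly how that first attempt is wired up at the moment (a simplified sketch, not my exact code; LadderFilterBase stands in for whichever model I'm testing):

// Sketch of attempt 1: the smoother ramps nicely on its own, but the filter
// only ever reads it once per block, so the cutoff still moves in block-sized steps.
void applyFilterPerBlock (LinearSmoothedValue<float>& cutoffSmoothed,
                          LadderFilterBase& filter,
                          float* channelData, int numSamples)
{
    filter.SetCutoff (cutoffSmoothed.getNextValue()); // read once per block
    filter.Process (channelData, numSamples);         // whole block runs at that single value
}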

Is there a way to update the cutoff value inside the for-loop of these filters that calculate the next buffer block? Or, how would one tackle this problem?
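Conceptually, what I'd like is something along these lines inside the filter's own processing loop (just a sketch; processSample() and cutoffSmoothed are placeholders for however the per-sample value would get into the filter):

// What I'd ideally want the filter to do internally (sketch, not real code):
void Process (float* samples, int numSamples)
{
    for (int i = 0; i < numSamples; ++i)
    {
        SetCutoff (cutoffSmoothed.getNextValue()); // advance the cutoff every sample
        samples[i] = processSample (samples[i]);   // then filter just this one sample
    }
}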

To give some more insight into my code's structure:

In PluginProcessor.cpp, the main process function looks like this (delayBuffer is effectively unused):

template <typename FloatType>
void JuceDemoPluginAudioProcessor::process (AudioBuffer<FloatType>& buffer,
                                            MidiBuffer& midiMessages,
                                            AudioBuffer<FloatType>& delayBuffer)
{
    const int numSamples = buffer.getNumSamples();
   

    // Now pass any incoming midi messages to our keyboard state object, and let it
    // add messages to the buffer if the user is clicking on the on-screen keys
    keyboardState.processNextMidiBuffer (midiMessages, 0, numSamples, true);

    
    // set various synthesizer voice parameters
    setOscGains(*osc1GainParam, *osc2GainParam);
    
    setAmpEnvelope(*attackParam1, *decayParam1, *sustainParam1, *releaseParam1);
    setPitchEnvelope(*attackParam2, *decayParam2, *sustainParam2, *releaseParam2);
    
    setPitchModulation(*pitchModParam);
    setOsc1DetuneAmount(*osc1DetuneAmountParam, *oscOffsetParam);
    setOsc2DetuneAmount(*osc2DetuneAmountParam, 0);
    
    // and now get our synth to process these midi events and generate its output.
    synth.renderNextBlock (buffer, midiMessages, 0, numSamples);
    
    
    // getting our filter envelope values
    applyEnvelope(buffer,delayBuffer);
    
    // applying our filter
    applyFilter(buffer, delayBuffer);
    
    

    // In case we have more outputs than inputs, we'll clear any output
    // channels that didn't contain input data, (because these aren't
    // guaranteed to be empty - they may contain garbage).
    for (int i = getTotalNumInputChannels(); i < getTotalNumOutputChannels(); ++i)
        buffer.clear (i, 0, numSamples);

    applyGain (buffer, delayBuffer); // apply our gain-change to the outgoing data..

    // Now ask the host for the current time so we can store it to be displayed later...
    updateCurrentTimeInfoFromHost();
}

It calls multiple templated process functions to keep the code factored into smaller pieces.

applyEnvelope() sets the cutoff, resonance and drive values every sample:

template <typename FloatType>
void JuceDemoPluginAudioProcessor::applyEnvelope (AudioBuffer<FloatType>& buffer, AudioBuffer<FloatType>& delayBuffer)
{
    ignoreUnused(delayBuffer);
    
    filterEnvelope->setAttackRate(*attackParam3);
    filterEnvelope->setDecayRate(*decayParam3);
    filterEnvelope->setReleaseRate(*releaseParam3);
    filterEnvelope->setSustainLevel(*sustainParam3);
    
    const int numSamples = buffer.getNumSamples();
    
    for (int i = 0; i < numSamples; i++)
    {
        float range = *filterContourParam * contourMultiplier; // contourMultiplier is a knob that controls how much the envelope influences the filter parameters
        currentCutoff = *filterParam + (filterEnvelope->process() * range); //the filterEnvelope class returns the amplitude every sample
        
        //clamp the cutoff frequency
        if (currentCutoff > 12000.0)
            currentCutoff = 12000.0;
        else if (currentCutoff < 0.0)
            currentCutoff = 0.0;
        
        
        //set the filter parameters (one filter instance per channel)
        for (int ch = 0; ch < 2; ch++) {
            filter2[ch].SetCutoff(currentCutoff);
            filter2[ch].SetResonance(*filterQParam);
            filter2[ch].SetDrive(*filterDriveParam);
        }
    }
}

applyFilter() applies the filter to the current buffer using the parameters set above:

template <typename FloatType>
void JuceDemoPluginAudioProcessor::applyFilter (AudioBuffer<FloatType>& buffer, AudioBuffer<FloatType>& delayBuffer)
{
    ignoreUnused (delayBuffer);
    
    const int numSamples = buffer.getNumSamples();
    
    // note: these C-style casts assume FloatType is float; with a double
    // buffer they would reinterpret the data rather than convert it
    float* channelDataLeft = (float*) buffer.getWritePointer(0);
    float* channelDataRight = (float*) buffer.getWritePointer(1);
    
    filter2[0].Process(channelDataLeft, numSamples);
    filter2[1].Process(channelDataRight, numSamples);
    
    
    // since the drive parameter generates a lot of volume, we reduce the output
    if (*filterDriveParam > 1.0)
    {
        FloatType reduction = sqrt(1 / *filterDriveParam);
        for (int i = 0; i < numSamples; i++)
        {
            channelDataLeft[i]  *= reduction;
            channelDataRight[i] *= reduction;
        }
    }
}

Hi, I know this was a while ago, but did you ever find a solution to this? I'm having the same problem.

Just a shot in the dark: change the frequency logarithmically, not linearly.

Well, I managed to circumvent this issue by splitting the buffer into smaller chunks and then processing these chunks:

template <typename FloatType>
void MonosynthPluginAudioProcessor::applyFilter (AudioBuffer<FloatType>& buffer, LadderFilterBase* filter)
{
    
    FloatType* channelDataLeft  = buffer.getWritePointer(0);
    
    const int numSamples = buffer.getNumSamples();

    
    //
    //  break buffer into chunks
    //
    int stepSize = jmin(16, numSamples);
    
    int samplesLeftOver = numSamples;
    
    
    for (int step = 0; step < numSamples; step += stepSize)
    {
        
        FloatType combinedCutoff = currentCutoff + smoothing[0]->processSmooth( cutoff.getNextValue() );

        if (combinedCutoff > CUTOFF_MAX) combinedCutoff = CUTOFF_MAX;
        if (combinedCutoff < CUTOFF_MIN) combinedCutoff = CUTOFF_MIN;

        auto snapToLocalVal = [](double val) -> double { if (val < 0.0) val = 0.0; else if (val > 1.0) val = 1.0; return val; };

        FloatType newReso =  snapToLocalVal(resonance.getNextValue());

        filter->SetResonance(newReso);
        filter->SetDrive(drive.getNextValue());
        

        if (samplesLeftOver < stepSize)
            stepSize = samplesLeftOver;
        

        if (prevCutoff == combinedCutoff)
        {
            // cutoff hasn't moved since the last chunk: set it once and process normally
            filter->SetCutoff(combinedCutoff);
            filter->Process(channelDataLeft, stepSize);
        }
        else
        {
            // cutoff has moved: ramp from the previous value to the new one over this chunk
            filter->ProcessRamp(channelDataLeft, stepSize, prevCutoff, combinedCutoff);
        }
        
        prevCutoff = combinedCutoff;
        samplesLeftOver -= stepSize;
        channelDataLeft += stepSize;
    }
    
    FloatType* dataLeftPass2 = buffer.getWritePointer(0);
    FloatType* dataRightPass2 = buffer.getWritePointer(1);
    
    // mono synth: copy the processed left channel to the right channel
    for (int i = 0; i < numSamples; i++)
    {
        dataRightPass2[i] = dataLeftPass2[i];
    }
    
}

I also put a lowpass filter on the cutoff itself for further smoothing (sketched below, after the ramp code), and made a separate method in my filters that lerps the cutoff between the previous value and the new value:

virtual void ProcessRamp(double* samples, size_t n, double beginCutoff, double endCutoff) override
{
    const auto increment = (endCutoff - beginCutoff) / static_cast<double> (n);

    for (uint32_t i = 0; i < n; i++)
    {
        SetCutoff(beginCutoff);
        samples[i] = doFilter(samples[i]);
        beginCutoff += increment;
    }
}
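The lowpass on the cutoff is just a one-pole smoother; a minimal sketch of what the processSmooth() call in the chunked code above boils down to (my real class has a few more parameters):

// one-pole lowpass used purely for smoothing the cutoff parameter (sketch)
class ParamSmoother
{
public:
    // a coefficient closer to 1.0 means slower, smoother cutoff movement
    void setCoefficient (double coeff) { a = coeff; }

    double processSmooth (double input)
    {
        z = a * z + (1.0 - a) * input;  // y[n] = a * y[n-1] + (1 - a) * x[n]
        return z;
    }

private:
    double a = 0.99; // smoothing coefficient
    double z = 0.0;  // previous output
};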

That doesn't reduce any staircasing. Using a logarithmic scale simply makes a cutoff slider respond appropriately.

If you have several values from a slider with a logarithmic scale and interpolate between those values linearly, that can of course be a cause of stair-casing (though it might not be the cause in the first post).

For example, values from the slider over time:

10, 100, 1000

and now interpolate linearly between the values:

10, 40, 70, 100, 400, 700, 1000

You can see the problem is obvious: sometimes the value doubles compared to the previous one, sometimes it only increases a little.

This is how a logarithmic interpolation would look:

10, 21, 46, 100, 215, 464, 1000
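A quick standalone sketch to make the difference concrete (just an illustration, not code from the plugin):

#include <cmath>
#include <cstdio>

int main()
{
    const double lo = 10.0, hi = 1000.0;
    const int steps = 6;

    for (int i = 0; i <= steps; ++i)
    {
        const double t = static_cast<double> (i) / steps;

        const double linear      = lo + t * (hi - lo);           // equal steps in Hz
        const double logarithmic = lo * std::pow (hi / lo, t);   // equal steps in ratio

        std::printf ("t = %.2f   linear = %7.1f Hz   log = %7.1f Hz\n", t, linear, logarithmic);
    }
    return 0;
}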

That might be worth considering when using larger buffers, but I use a 'chunk size' of 16 samples maximum and oversample at a minimum of 96 kHz. At that point (16 samples at 96 kHz is roughly 0.17 ms per cutoff update), it doesn't really matter if you simply interpolate linearly.