Controlling IIRFilters with a slider

My next noob question… I have an instance of IIRFilter, and sliders to control its parameter values (e.g. frequency and gain). This seems to work OK if I set IIRCoefficients inside processBlock like so:

double sampleRate = getSampleRate();
my_coeffs = IIRCoefficients::makeHighShelf(sampleRate, my_freq, 1, my_gain);
my_filter.setCoefficients(my_coeffs);
my_filter.processSamples(outR, blockSize);

In the example above my_freq and my_gain are values set using UI sliders.

However I have read that IIRCoefficients should be defined only once in prepareToPlay rather than repeatedly in processBlock. Is that true? And if so, how would one change the coefficient parameters in realtime in response to slider feedback, since presumably that needs to happen in processBlock?

You’re right that it needs to happen in real time, so yes, you should recalculate your coefficients in the process block (at least). Even then, you may experience zipper noise if you’re automating the filter’s parameters, since technically you have to recalculate the coefficients every sample, not every block.

Thanks for the reply @Mayae. I haven’t experienced zipper noise yet, but to avoid that possibility do you think it would be sufficient to recalculate the coefficients for each sample like so inside processBlock:

int blockSize = buffer.getNumSamples();
for ( int i = 0; i < blockSize; i++ )
{
    // Recalculate coefficient and apply filter here
}

Are there any performance drawbacks to doing this, or a better method that I am unaware of?

Just handle the Slider changes:

void CFourBandEQEffectProcessor::setFilter_3_Gain (double dGain)
{
    m_dFilter_3_Gain = dGain;

    updateCoefficients();
}

void CFourBandEQEffectProcessor::setFilter_3_CenterFreq (double dCenterFreq)
{
    m_dFilter_3_CenterFreq = dCenterFreq;

    updateCoefficients();
}

and have a method to update your coefficients:

void CFourBandEQEffectProcessor::updateCoefficients()
{
    m_Filter_1_Coefficients = IIRCoefficients::makePeakFilter (m_dSampleRate, m_dFilter_1_CenterFreq, m_dFilter_1_Bandwidth, (float) m_dFilter_1_Gain);
    m_Filter_2_Coefficients = IIRCoefficients::makePeakFilter (m_dSampleRate, m_dFilter_2_CenterFreq, m_dFilter_2_Bandwidth, (float) m_dFilter_2_Gain);
    m_Filter_3_Coefficients = IIRCoefficients::makePeakFilter (m_dSampleRate, m_dFilter_3_CenterFreq, m_dFilter_3_Bandwidth, (float) m_dFilter_3_Gain);
    m_Filter_4_Coefficients = IIRCoefficients::makePeakFilter (m_dSampleRate, m_dFilter_4_CenterFreq, m_dFilter_4_Bandwidth, (float) m_dFilter_4_Gain);

    m_Filter_1_Left.setCoefficients (m_Filter_1_Coefficients);
    m_Filter_1_Right.setCoefficients (m_Filter_1_Coefficients);

    m_Filter_2_Left.setCoefficients (m_Filter_2_Coefficients);
    m_Filter_2_Right.setCoefficients (m_Filter_2_Coefficients);

    m_Filter_3_Left.setCoefficients (m_Filter_3_Coefficients);
    m_Filter_3_Right.setCoefficients (m_Filter_3_Coefficients);

    m_Filter_4_Left.setCoefficients (m_Filter_4_Coefficients);
    m_Filter_4_Right.setCoefficients (m_Filter_4_Coefficients);
}

Your process block just processes the filters:

void CFourBandEQEffectProcessor::processBlock (juce::AudioSampleBuffer& buffer, juce::MidiBuffer& /* midiMessages */)
{
    if (! m_bPaused)
    {
        if (! isBypassed())
        {
            const int numSamples = buffer.getNumSamples();

            float* dataLeft  = buffer.getWritePointer (0);
            float* dataRight = buffer.getWritePointer (1);

            m_Filter_1_Left.processSamples  (dataLeft,  numSamples);
            m_Filter_1_Right.processSamples (dataRight, numSamples);

            m_Filter_2_Left.processSamples  (dataLeft,  numSamples);
            m_Filter_2_Right.processSamples (dataRight, numSamples);

            m_Filter_3_Left.processSamples  (dataLeft,  numSamples);
            m_Filter_3_Right.processSamples (dataRight, numSamples);

            m_Filter_4_Left.processSamples  (dataLeft,  numSamples);
            m_Filter_4_Right.processSamples (dataRight, numSamples);
        }
    }
}

I haven’t had any issues with zipper noise… or responsiveness, since the buffer size is small enough not to be an issue. But I’m not automating my params.

Rail

@Rail_Jon_Rogut That approach works well - thanks so much! I think it is unlikely that the user will want to automate my plugin’s filter parameters, so I won’t worry about zipper noise at this point.

I’m sorry, I might be responsible for that misunderstanding:
IIRCoefficients is a very small class consisting of just a few values, I think, so it is no problem to create it on the stack in processBlock. It is only problematic to create things on heap memory, which can take varying amounts of time depending on the state of the machine. Resizing an Array would take place on the heap, so it should be avoided. What definitely has to be avoided in processBlock is allocating buffers etc.

A note on @Rail_Jon_Rogut’s solution: updateCoefficients is the result of a GUI action, i.e. it runs on the message thread. processBlock runs in parallel, so it can read the coefficients for filtering while they are being written by updateCoefficients, which leads to undefined behaviour.
In practice it might be no problem, as long as the coefficients are changed slowly (I am no DSP guru…), but you can improve that by adding a lock (ScopedWriteLock) to synchronise:

void CFourBandEQEffectProcessor::updateCoefficients()
{
    const ScopedWriteLock myScopedLock (getCallbackLock());
    // myLock is now locked
    // ...

The processBlock is automatically locked with the CriticalSection retrieved by AudioProcessor::getCallbackLock()

Hope that helps…

IIRFilter::setCoefficients() does a lock for you

Rail

I didn’t know that, thanks Rail!

EDIT: I read a little in the sources. The processLock the filter uses guarantees that setting all the coefficients for the filter is an atomic operation, but it is not synchronised with the critical section of the audio thread calling processBlock.
I don’t know if this is a problem in practice, but I think the lock in updateCoefficients using the CriticalSection of the audio thread is not a bad idea… but I am happy to learn if I’m wrong…

Some people argue locking in the audio thread isn’t a problem, but as we all know, audio waits for nothing. Real-time music DSP’s strength is, well, being real-time. For prototyping and private projects it obviously isn’t a problem, but I would never do something like this in production code. I’ve seen audio stutters live on stage, and have personally experienced it live myself as well, due to a single plugin taking ‘liberties’.

[quote=“daniel, post:8, topic:20405”]
EDIT: I read a little in the sources, the processLock the filter uses guarantees that setting all coefficients for the filter is an atomic operation. But it is not synchronised with the critical section of the audio thread calling processBlock. I don’t know if this is a problem in practice, but I think the lock in updateCoefficients using the criticalSection of the audio thread is not a bad idea… but I am happy to learn if I’m wrong…
[/quote]

It is synchronized, since the IIRFilter object acquires the same lock inside its own processing method.


You can easily swap a complex data structure atomically, without any associated cost, in a perfectly safe and standard-compliant way:
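The code example did not survive in this copy of the post, but one standard-compliant way to do such a swap is via the shared_ptr atomic free functions: the message thread builds the new coefficient set off-line and publishes it in one shot, while the audio thread takes a snapshot that stays alive for the whole block. All names here are hypothetical (a real plugin would put juce::IIRCoefficients where Coeffs is), and note that these free functions may use an internal lock on some standard-library implementations:

```cpp
#include <atomic>
#include <memory>

// Hypothetical coefficient set; stands in for juce::IIRCoefficients.
struct Coeffs { double c0 = 0, c1 = 0, c2 = 0, c3 = 0, c4 = 0; };

class CoeffHolder
{
public:
    // Message thread: build the new set, then publish it in one shot.
    void publish (const Coeffs& c)
    {
        std::atomic_store (&current, std::shared_ptr<const Coeffs> (std::make_shared<const Coeffs> (c)));
    }

    // Audio thread: take a snapshot; the returned pointer keeps the set
    // alive even if the message thread publishes a replacement mid-block.
    std::shared_ptr<const Coeffs> snapshot() const
    {
        return std::atomic_load (&current);
    }

private:
    std::shared_ptr<const Coeffs> current = std::make_shared<const Coeffs>();
};
```

The audio thread never sees a half-written coefficient set: it either gets the old one or the new one, and the old set is freed only once the last reader drops its snapshot.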

[quote=“jnicol, post:3, topic:20405”]
Thanks for the reply @Mayae. I haven’t experienced zipper noise yet, but to avoid that possibility do you think it would be sufficient to recalculate the coefficients for each sample like so inside processBlock:

int blockSize = buffer.getNumSamples();
for ( int i = 0; i < blockSize; i++ )
{
// Recalculate coefficient and apply filter here
}

Are there any performance drawbacks to doing this, or a better method that I am unaware of?
[/quote]

Yes, recalculating filter coefficients is not cheap compared to most other operations. To be pedantic, the problem with your approach is that it isn’t invariant to the host’s block size (i.e. you get more zipper noise the higher the hardware latency is).

Most people implement some form of linear interpolation, which, while vastly better, can still introduce varying amounts of smoothing / discontinuities. Personally, for mastering and mixing purposes, I’ve found the only thing that works well, while also always producing the same output and/or approximating an automation signal to a convergent degree, is to integrate the control signals using some higher-order lowpass filter and update the filter coefficients each sample.

So, it’s a tradeoff between processing time, determinism and quality, and remember, it’s one you can always change at a later stage. If a plugin doesn’t support automation of its filters, for instance, it would be extreme overkill to implement all of this.
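That per-sample control-signal integration can be sketched with a one-pole lowpass on the parameter value (a first-order version of the higher-order smoothing described above; all names here are hypothetical):

```cpp
#include <cmath>

// One-pole lowpass smoother for a control signal (e.g. a slider value).
// 'timeMs' sets roughly how long the smoothed value takes to glide.
class ParameterSmoother
{
public:
    void prepare (double sampleRate, double timeMs)
    {
        // Classic one-pole coefficient: larger time constant -> slower glide.
        alpha = std::exp (-1.0 / (sampleRate * timeMs * 0.001));
        state = target;
    }

    void setTarget (double t) { target = t; }   // called from the UI side

    // Called once per sample inside the audio loop; the returned value
    // is what you would feed into the coefficient recalculation.
    double next()
    {
        state = alpha * state + (1.0 - alpha) * target;
        return state;
    }

private:
    double alpha = 0.0, state = 0.0, target = 0.0;
};
```

Inside the per-sample loop you would call next() and rebuild the coefficients from the smoothed value, which is exactly the expensive-but-deterministic tradeoff discussed above.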

Now I got it, thanks… sometimes it takes a little longer… :wink:

Wow, what great information guys! Thanks for the detailed replies. Interesting stuff about locking the audio thread.

Since this is meant to be a simple hobby project I won’t slow myself down by prematurely optimizing my plugin, but it is good to know about the different approaches that can be used for smooth filter automation.

Don’t worry, it wasn’t you who gave me the idea that coefficients should only be defined once - I picked that up from a KVR thread.

Does someone have a working IIRFilter that they would be willing to let me take a look at? Trying to get my own working, and I’m so stuck. So far haven’t been able to get help here on the forums or on the IRC channel… would be extremely appreciated.

any luck???

Buy a copy of Will Pirkle’s “Developing Audio Effect Plugins in C++.” Cheap for a textbook and it has plenty of code examples including IIR filters.

My code for simple IIRs looks like this (T-DF2 structure):

class Biquad
{
public:
    Biquad()
        : z0 (0.0f), z1 (0.0f), a0 (0.0f), a1 (0.0f), a2 (0.0f), b1 (0.0f), b2 (0.0f)
    {}
    ~Biquad() {}

    // y is written in place
    inline void ProcessSample (const float& x, float& y)
    {
        y  = a0 * x + z0;
        z0 = a1 * x + b1 * y + z1;
        z1 = a2 * x + b2 * y;
    }

    float z0, z1, a0, a1, a2, b1, b2;
};

For coefficient calculation check out the Audio EQ cookbook from musicdsp, or Will Pirkle’s book. I may have flipped the a/b names for coefficients, but that’s because I do it the right way (let’s start a flame war).

An arbitrary-order IIR can be constructed from serial biquads. That said, there are hazards in computing the coefficients if only the transfer function is known; if you know the poles/zeroes, then you’re good.

The T-DF2 structure (transposed direct form 2) is numerically sound for fixed- and floating-point signals and has fewer multiplies/adds than DF1; however, the trade-off is in the possible pole/zero locations, an effect that gets magnified with floating-point variables. This pops up with high-order and low-Q filters and some weird shelving filters, but not for your typical audio filters. If you’re interested, Zölzer’s text “Audio Signal Processing” goes into detail.
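The serial-biquad idea can be sketched like this (a standalone variant of the T-DF2 biquad above, with hypothetical names; sections default to pass-through, and the b coefficients are assumed to already carry their signs):

```cpp
#include <vector>

// Minimal T-DF2 biquad; defaults to a pass-through section (a0 = 1).
struct Biquad
{
    double a0 = 1.0, a1 = 0.0, a2 = 0.0, b1 = 0.0, b2 = 0.0;
    double z0 = 0.0, z1 = 0.0;

    double process (double x)
    {
        const double y = a0 * x + z0;   // output taps the first state
        z0 = a1 * x + b1 * y + z1;      // shuffle the delay line
        z1 = a2 * x + b2 * y;
        return y;
    }
};

// An Nth-order IIR built as a chain of second-order sections.
struct BiquadCascade
{
    std::vector<Biquad> sections;

    double process (double x)
    {
        for (auto& s : sections)
            x = s.process (x);          // each stage feeds the next
        return x;
    }
};
```

Factoring a high-order filter into second-order sections like this keeps each stage's coefficients well conditioned, which is why cascades are preferred over one big direct-form filter.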

I ended up doing it pretty much exactly as described in Rail_Jon_Rogut’s post earlier in this thread: When a slider is moved it calls a custom method in my plugin processor called updateCoefficients which creates new IIRCoefficients for each filter. Inside processBlock each filter is applied in the usual fashion.

Rail_Jon_Rogut’s example code shows everything except creating sliders, which is covered by this Juce tutorial: https://www.juce.com/doc/tutorial_slider_values.

I second the recommendation for Will Pirkle’s book. It’s the best plugin-programming resource I have come across.