Reducing Zipper Noise in Volume and Pan Changes

Problem: Clicks and pops while moving a fader controlling the volume on a track playing back a sine wave. I have a solution that greatly reduces the problem, but the clicks can still be produced if the fader is moved fast enough.
Here’s the pre-solution setup: the faders are implemented as Sliders, the (stereo) track is derived from PositionableAudioSource, and the internal pan and volume members are protected by a CriticalSection. Here’s a snippet from getNextAudioBlock that renders volume and pan:

float gain = getGain();
float pan = getPan();
float left = 0.5f * gain * (1 - pan);
float right = 0.5f * gain * (1 + pan);

bufferToFill.buffer->applyGain(0, bufferToFill.startSample, bufferToFill.numSamples, left);
bufferToFill.buffer->applyGain(1, bufferToFill.startSample, bufferToFill.numSamples, right);

My solution uses static floats to hold the previous settings and applyGainRamp to move from those settings to the new ones. The static floats are a quick hack to test the idea: although they let the code pick up the initial value of a control at startup, they don’t account for the case where the transport is stopped and a control is moved. A better solution would replace the statics with members that get updated whenever a control moves while the transport is off. Here’s the solution:

[code]
float left;
float right;
float gain = getGain();
float pan = getPan();

left = 0.5f * gain * (1 - pan);
right = 0.5f * gain * (1 + pan);

static float prevLeft = left;
static float prevRight = right;

bufferToFill.buffer->applyGainRamp(0, bufferToFill.startSample, bufferToFill.numSamples, prevLeft, left);
bufferToFill.buffer->applyGainRamp(1, bufferToFill.startSample, bufferToFill.numSamples, prevRight, right);

prevLeft = left;
prevRight = right;[/code]

I could probably improve it by using a higher-order smoother, but I’m reluctant to wade through the math and experimentation just to find that performance sucks. Since the defect was uncovered while trying to implement mix automation, it might make sense to filter the automation data after capture instead. My solution is good enough for most manual fader use; I didn’t even notice the problem until I started testing automation, and even then only on certain source material.

Any comments or ideas are welcome. I’m particularly interested in other folks’ solutions to what surely must be a common problem.

A filter something like this:

float coeff = 0.9f;
float newGain = getGain();

while (numSamples--)
{
    // currentGain is a class member initialised to zero
    currentGain = newGain + coeff * (currentGain - newGain);

    // ...
    // ... apply gain from input samples to output samples
}

With coeff == 0 there’s no smoothing; the closer it is to 1, the longer the lag time. You can convert a lag time in seconds (the time taken to get within 0.001 of the target value, i.e. -60 dB) to the coeff.
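The error of this one-pole filter decays by a factor of coeff every sample, so you want coeff raised to the power of the number of samples in the lag time to equal 0.001. A sketch of the conversion (lagSeconds and sampleRate are placeholder names):

[code]
#include <cmath>

// Solve coeff^(lagSeconds * sampleRate) == 0.001 for coeff
float lagTimeToCoeff (float lagSeconds, float sampleRate)
{
    return std::exp (std::log (0.001f) / (lagSeconds * sampleRate));
}
[/code]

Only recompute this when the lag time or sample rate changes; there’s no need to call it per sample.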

I have the same problem here. Using the filter works, but it’s a load on the CPU when doing it in the process block.
Is there another way to do that? Can I find anything in the examples?

Have you checked out the LinearSmoothedValue class?

https://bill-auger.github.io/JUCE/doxygen/doc/classLinearSmoothedValue.html

Rail

I was just reading about that :smiley:
I couldn’t find any example of using it, though.
I’m a bit new, so could you show me how to set it up (defining, calling, etc.)?

Hmm…

In my processor class I have a private member:

LinearSmoothedValue<double> m_dVolume;

and a public method:

void    setVolume (double newVolume);

In the constructor of the class I set the default sample rate:

setSampleRate (44100.0);

and I have:

void CVolumeAndMuteProcessor::setSampleRate (const double dSampleRate)
{
    const double dSmoothTime = 0.0001;

    m_dVolume.reset (dSampleRate, dSmoothTime);
}

void CVolumeAndMuteProcessor::prepareToPlay (double dSampleRate, int estimatedSamplesPerBlock)
{
    setSampleRate (dSampleRate);

    BaseProcessor::prepareToPlay (dSampleRate, estimatedSamplesPerBlock);
}

void CVolumeAndMuteProcessor::setVolume (const double newVolume)
{
    m_dVolume.setValue (juce::jlimit (0.0, getMaximum(), newVolume));

    updateHostDisplay();
}

double CVolumeAndMuteProcessor::getVolume()
{
    return m_dVolume.getNextValue();
}

void CVolumeAndMuteProcessor::processBlock (juce::AudioSampleBuffer& buffer, juce::MidiBuffer& /* midiMessages */)
{
    :

    if (! isBypassed())
        {
        const float localVolume = (float) m_dVolume.getNextValue();
    
        buffer.applyGain (0, buffer.getNumSamples(), localVolume);
        }
}
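
Note that the block above pulls a single smoothed value per block, so the gain still steps at block rate. If you want the gain to ramp across the samples inside the block, a per-sample variation might look something like this (just a sketch, reusing the same m_dVolume member):

[code]
// Inside processBlock(), instead of the single getNextValue() call above
if (! isBypassed())
{
    for (int i = 0; i < buffer.getNumSamples(); ++i)
    {
        // Advance the smoother one step per sample...
        const float localVolume = (float) m_dVolume.getNextValue();

        // ...and apply the same gain to every channel
        for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
            buffer.getWritePointer (ch)[i] *= localVolume;
    }
}
[/code]

Depending on your JUCE version, LinearSmoothedValue also has applyGain() helpers that wrap this kind of loop for you.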

Rail

Thanks a lot! That’s exactly what I was looking for. Saved me a lot of time figuring it out by myself. I appreciate the help, Rail.

The linear smoothed version is another good solution and probably faster in many circumstances, although I’m surprised that the lag filter was so slow. I’m curious whether you were calculating the coefficient every sample, or something like that. Calling std::log() and std::exp() every sample is going to be slow.

I wasn’t calculating the coefficient for each sample, but there were other recalculations when the variables were changing, and I had to do more work to get around that.
With LinearSmoothedValue it’s much easier and faster. I can use the isSmoothing() flag to check whether the value is changing or not and recalculate accordingly. :slight_smile:
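
For reference, a minimal sketch of that check, assuming a LinearSmoothedValue<float> member called m_gain (a placeholder name, not from the posts above):

[code]
if (m_gain.isSmoothing())
{
    // Still ramping: apply the smoothed gain sample by sample (as in the earlier per-sample loop)
    for (int i = 0; i < buffer.getNumSamples(); ++i)
        buffer.applyGain (i, 1, m_gain.getNextValue());
}
else
{
    // Settled: a single flat gain for the whole block is enough
    buffer.applyGain (m_gain.getTargetValue());
}
[/code]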