So I've been having some issues with zipper artifacts in my IIR filters. To preface: I have 2 low-shelf and 2 high-shelf filters which get adjusted at different points in my ProcessorChain; however, none of the filters ever move their frequency, just their gain.
These filters realistically won't change too often, but when my tone slider moves I get some pretty bad zippering unless the slider is moved pretty slowly. It is my understanding that dsp::StateVariableFilter would probably be better to use, however that filter doesn't support shelves.
I have been trying to get a SmoothedValue hooked up to these filters' gain parameters for a while now, but since I am using a ProcessorChain, and not iterating over the buffer manually, I'm not sure how I would appropriately change the filter parameters for each new smoothed value. I can update the parameters once per processBlock call, but that is only fast enough if my SmoothedValue ramp time is around 1.5 seconds, which removes the zippering but is far too slow.
To summarize, my question is: how can I use a SmoothedValue to adjust IIR filter coefficients that live inside a ProcessorChain? And if this is not possible, what other routes could I take?
It’s a big question with lots of ramifications. The IIR filters in particular are heavy, as they call new when you set their cutoff and gain.
The basic principle though is to free yourself from processing at the same block size as the host, or your plugin will sound different depending on the host block size.
Once you have some system to do that, you should be able to tune the parameters to update as often as you want – and you can tune it to balance the audio quality vs CPU.
What you’re seeking is the concept of a fixed control rate.
The basic principle though is to free yourself from processing at the same block size as the host, or your plugin will sound different depending on the host block size.
This is an interesting point I never thought about, but it totally makes sense. Do you maybe have any more insight or resources on how the sound would change with different block sizes?
It’s basically an aliasing issue. If you have fast modulation and a large block size, and you’re processing per block, your control signals can start to alias because they’re not being sampled fast enough for the frequency they’re running at.
In the case of simple knobs, it’s still an issue when you consider how large block sizes can get. If you’re only updating your filter coefficients every 2048 samples because that’s the block size the host gave you, you’re going to have to run with some pretty slow smoothing so that you don’t hop too far in your coefficients between blocks. Now you’ve optimized for the worst case, so you’ve dropped your sound quality for the people running 128-sample blocks.
Thanks for the reply. However, it seems like getting something like this working would realistically not be worth it just to reduce some zippering on one knob! Is a fixed control rate commonly used, though?
It’s a great question, and one that I’ve been thinking about when considering trying to use ProcessorChain in plugins. Once you call processorChain.process, the block of audio data gets processed all in one go, and there’s no opportunity from outside the chain to change control values.
I’m not sure of the best way around it, but smoothing the control values from inside the chain was one idea I had. For instance, maybe a templated wrapper around the juce::dsp classes that combines the dsp class with a SmoothedValue. The ProcessorChain could still call the process method, but internally the wrapper would go sample by sample, calling SmoothedValue::getNextValue, recalculating dsp coefficients if necessary, and then calling the processSample method of the dsp object.
But at what point is that just super convoluted, rather than handling the parameter smoothing sample by sample in your main processor code and calling those processSample methods yourself… which raises the question: when is it worth it to use ProcessorChain?
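For what it’s worth, a rough JUCE-free sketch of that wrapper idea might look like this. `GainStage` and `LinearSmoother` are simplified stand-ins for a juce::dsp processor and juce::SmoothedValue, invented here just to show the shape; a real version would wrap the actual dsp classes and their coefficient updates.

```cpp
#include <cmath>
#include <vector>

// Stand-in for a juce::dsp-style processor: a gain stage whose
// "coefficients" must be recomputed whenever the parameter changes.
struct GainStage {
    float gain = 1.0f;
    void setGainDecibels(float dB) { gain = std::pow(10.0f, dB / 20.0f); }
    float processSample(float x) const { return gain * x; }
};

// Linear-ramp smoother, a simplified stand-in for juce::SmoothedValue.
struct LinearSmoother {
    float current = 0.0f, target = 0.0f, step = 0.0f;
    void setTarget(float t, int rampSamples) {
        target = t;
        step = (target - current) / float(rampSamples);
    }
    bool isSmoothing() const { return current != target; }
    float getNextValue() {
        if (!isSmoothing()) return current;
        current += step;
        if ((step > 0 && current >= target) || (step < 0 && current <= target))
            current = target;  // clamp so the ramp terminates exactly
        return current;
    }
};

// The wrapper: a chain can still call process() on the whole buffer, but
// internally we go sample by sample, recomputing "coefficients" only
// while the value is still ramping.
template <typename Proc>
struct Smoothed {
    Proc proc;
    LinearSmoother smoother;
    void process(std::vector<float>& buf) {
        for (auto& s : buf) {
            if (smoother.isSmoothing())
                proc.setGainDecibels(smoother.getNextValue());
            s = proc.processSample(s);
        }
    }
};
```

A real IIR wrapper would also want to avoid recomputing coefficients literally every sample (that’s where the per-sample cost, and the allocation issue mentioned above, bites); recomputing every N samples while the value ramps is a common compromise.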
i personally just apply a simple one-pole lowpass filter to the parameter value of choice, and if that’s not sufficient i stack 2 or 3 of them together, as that smooths the edges a bit
no, sorry if i was unclear. it works like this:
imagine your parameter as a signal (white line). you can do that because each block you get a new parameter value, so you could imagine it being put into a buffer, and it would be some sort of steppy signal: it holds one value for a whole block, and next block it might be somewhere completely different, just as i have drawn it here. now you apply a lowpass filter to that signal (green) and it will reduce the impact of the discontinuity and make a smooth transition into the new value.

the only issue with this is that the beginning of the parameter change might still be a little edgy, but you can reduce this effect by increasing the filter order (blue). another problem with this approach is that your parameter value will only approach its destination but won’t ever reach it 100%.

at the moment i’m experimenting with an idea against that, but i’m not sure yet if i can recommend it. it works so far, and it works like this: if the parameter has approached its destination so that their difference is smaller than 1.5 * (the coefficient that moves the value forward), i just push it to that value without the filter. as that is a very small step, i haven’t noticed any artefacts with this method yet
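in code the idea might look roughly like this (plain c++; the coefficient value and the 1.5 threshold are just the numbers from above, tune to taste):

```cpp
#include <cmath>

// one-pole lowpass on the parameter value, plus the "snap" trick:
// once the remaining distance is smaller than 1.5 * coeff, jump
// straight to the target instead of approaching it forever.
struct SnappySmoother {
    float current = 0.0f;
    float target  = 0.0f;
    float coeff   = 0.05f;  // fraction of the distance covered per step

    float next() {
        const float diff = target - current;
        if (std::fabs(diff) < 1.5f * coeff)
            current = target;          // tiny step left: just snap
        else
            current += coeff * diff;   // normal lowpass behaviour
        return current;
    }
};
```

stacking two or three of these in series, each one’s output driving the next one’s target, gives the higher order / softer edges mentioned above.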
@Mrugalla
If I understand this correctly, this is relevant when working per sample, and would not help when processing per buffer, since the value change will be big regardless of the smoothing slope?
@refusesoftware
So there’s really no simple way to perform smoothing with the buffer-processing based classes, like the dsp modules? I gather they are intended mainly for fixed processing? Your solution sounds nice but seems complicated.
@Wiznat
Did you find any solution to the zippering? I still can’t see how any fixed control rate can be introduced into per-buffer processing without advanced techniques, which in a way oppose the whole idea of working with pre-designed dsp classes.
It’s been a little while since I encountered this problem, so I don’t quite remember the solution I came up with, but looking in the project file, it looks like I added a SmoothedValue that only gets updated once per processBlock call instead of per sample, and then updated the filter once per processBlock call before processing.
I do know the SmoothedValue steps were really small, and consequently the gain transition time was pretty long, almost a full second. For my use case this wasn’t really an issue, but if you need a really fast-moving filter or something, I probably don’t have the answer.
Right?? This is one of my gripes learning JUCE right now. There are lots of examples and infrastructure for making it very easy…to do something that isn’t all that useful for a real product. I got some processed audio coming out with ease, but none of what I wrote was useful.
I would much rather see a few examples that are stripped down nuggets of advanced - and actually useful - ways of doing things. Otherwise, I’m having to dive into the core of all of the convenience functions to figure out how the lower level stuff works, so I can pull that back up and piece together my own functionality. At that point, all of the convenience stuff obfuscated what I actually needed.
My recommendation is: if you’re just getting started, always opt for sample-by-sample processing code if you’re doing anything remotely complicated.
Then when you have that working you can see if a) it’s even possible to do it on a block level & b) if there’s actually a performance benefit to doing so.
However that’s not at all the same advice everyone would give!