# [DSP module discussion] IIR::Filter and StateVariableFilter

In this thread: [DSP module discussion] Structure of audio plug-ins API, someone asked:

What’s the advantage of DF1/2 over TPT?

Let me summarize again what TPT and DF1/2 mean:

• A digital filter can be implemented with a DF1/DF2 structure (or their transposed versions) when it is defined by its Z-domain transfer function, i.e. a set of coefficients. For example, classic biquads have a transfer function like H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2). Knowing these coefficients (b0, b1, b2, a1, a2), the filtering can be done with the DF structures, and that’s how things work in the JUCE IIRFilter and dsp::IIR::Filter classes.
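To make that concrete, here is a minimal sketch of the transposed direct form 2 structure for that exact transfer function (an illustration of the structure, not the actual JUCE implementation; the class and member names are mine):

```cpp
#include <cassert>

// Transposed direct form 2 (TDF2) realisation of the biquad
// H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2).
struct BiquadTDF2
{
    float b0 = 1.0f, b1 = 0.0f, b2 = 0.0f, a1 = 0.0f, a2 = 0.0f;
    float s1 = 0.0f, s2 = 0.0f; // the two state variables

    float processSample (float x)
    {
        const float y = b0 * x + s1;       // output taps the first state
        s1 = b1 * x - a1 * y + s2;         // shift the state chain
        s2 = b2 * x - a2 * y;
        return y;
    }
};
```

Note that only the coefficients matter here; any filter design method that produces (b0, b1, b2, a1, a2) can be run through this same structure.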

• TPT means “topology-preserving transform”. Basically, the filter is generated from an analog prototype derived from the equations of an analog circuit (Kirchhoff’s laws), used either directly (see the DK-method or Andy Simper’s papers) or reworded as a graphical block scheme (see Vadim Zavalishin’s paper and Will Pirkle’s papers). Instead of finding an equivalent discrete transfer function with a set of coefficients as before, we try to keep the topology (the analog processing path, if you want), mainly to preserve its properties in a time-varying context. To do so, we discretize only the integrators, for example using the bilinear transform and the TDF2 locally, and everything around them stays the same. That’s why there is also an additional step to get the final processing structure: solving the delay-free loops, which is easy as long as everything is kept linear.
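The simplest possible illustration of those two steps (trapezoidal integrator, then an analytically solved delay-free loop) is a TPT one-pole lowpass. This is a sketch under my own naming, not any JUCE API:

```cpp
#include <cassert>
#include <cmath>

// TPT one-pole lowpass: the only thing discretized is the integrator
// (trapezoidal rule / bilinear transform); the feedback path around it is
// kept as in the analog block scheme, and the resulting delay-free loop
// is solved analytically.
struct OnePoleTPT
{
    float g = 0.0f; // prewarped integrator gain
    float s = 0.0f; // integrator state

    void setCutoff (float cutoffHz, float sampleRate)
    {
        g = std::tan (3.14159265f * cutoffHz / sampleRate); // prewarping
    }

    float processSample (float x)
    {
        const float v = (x - s) * g / (1.0f + g); // solved delay-free loop
        const float y = v + s;                    // lowpass output
        s = y + v;                                // trapezoidal state update
        return y;
    }
};
```

Because `g` is the only thing that depends on the cutoff, changing the frequency between samples touches one coefficient and leaves the state meaningful, which is exactly the time-varying benefit described above.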

Very often, the filters designed with a DF structure alone are generated using a filter design method which sets the values of the discrete transfer function coefficients directly. Classic EQ biquads are made this way, using the so-called bilinear transform. My new IIR filter design classes calculate digital allpass coefficients or manipulate poles and zeroes in the digital domain, so everything is done directly in the digital domain as well. Think also about regression-based methods for designing filters, where all the coefficients are fitted to get a given frequency response.
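As an example of that “design the coefficients in one shot, then run any DF structure” workflow, here is a cookbook-style lowpass design (the formulas follow the RBJ audio EQ cookbook; the struct and function names are illustrative, not JUCE’s):

```cpp
#include <cassert>
#include <cmath>

struct Coeffs { float b0, b1, b2, a1, a2; };

// 2nd-order lowpass via bilinear transform of an analog prototype,
// as in the RBJ audio EQ cookbook. The result can be fed directly
// into a DF1/DF2/TDF2 structure.
Coeffs makeLowpass (float cutoffHz, float Q, float sampleRate)
{
    const float w0    = 2.0f * 3.14159265f * cutoffHz / sampleRate;
    const float alpha = std::sin (w0) / (2.0f * Q);
    const float c     = std::cos (w0);
    const float a0    = 1.0f + alpha; // everything normalised by a0

    return { (1.0f - c) * 0.5f / a0,
             (1.0f - c)        / a0,
             (1.0f - c) * 0.5f / a0,
             -2.0f * c         / a0,
             (1.0f - alpha)    / a0 };
}
```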

But using the TPT structure means we can’t do this: we only discretize the integrator itself, and everything around it is left unchanged. So this structure gives less freedom in what we can do. And if we want to fit a specific frequency response to our processing, we need to make the changes in the analog block scheme instead of directly in the digital domain.

In short, the TPT structure is not the holy grail; it’s just the solution to a very specific problem, which is “how do I get the best processing behaviour for a digital filter derived from an analog prototype?”. For other problems (high-order filter design, for example), you have to use something easier to use / maintain / customize, like standard filters simulated with DF structures (or others, but that’s another topic).

One source of confusion in all this is the so-called State Variable Filter (SVF) circuit, which happens to be the one used to generate the digital coefficients in the RBJ audio EQ cookbook, used in the IIR filter design classes in JUCE. And that circuit is a perfect and very simple example of how to use the TPT structure, and how to get its advantages (fast modulation of the cutoff frequency). In TPT SVF code, most of the time the integrators are discretized with the bilinear transform / trapezoidal method and simulated with a DF structure, but everything around them is different. And in the end, the frequency response in a time-invariant context is the same.
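A per-sample sketch of that TPT SVF, in the spirit of Zavalishin’s and Simper’s derivations (both integrators use the trapezoidal rule, and the delay-free loop is solved analytically); this is an illustration, not the dsp::StateVariableFilter source:

```cpp
#include <cassert>
#include <cmath>

// TPT state variable filter: solving the zero-delay feedback loop gives
// the highpass output, and the two trapezoidal integrators produce the
// bandpass and lowpass outputs from the same tick.
struct SVFTPT
{
    float g = 0.0f, R2 = 1.414f; // g = tan(pi*fc/fs), R2 = 1/Q (damping)
    float s1 = 0.0f, s2 = 0.0f;  // integrator states

    void set (float cutoffHz, float Q, float sampleRate)
    {
        g  = std::tan (3.14159265f * cutoffHz / sampleRate);
        R2 = 1.0f / Q;
    }

    float processSample (float x) // returns the lowpass output
    {
        const float hp = (x - (R2 + g) * s1 - s2) / (1.0f + g * (R2 + g));
        const float bp = g * hp + s1;
        const float lp = g * bp + s2;
        s1 = g * hp + bp; // trapezoidal integrator updates
        s2 = g * bp + lp;
        return lp;
    }
};
```

Note that the integrator update lines are exactly the locally applied TDF2 mentioned above; the difference from a plain biquad is everything around them.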

That’s what you get with the new class dsp::StateVariableFilter.



Ah, that helps. Thank you! Good post!

That’s wrong! The processing code can be fast in TPT/SVF as well, and actually I think the frequency-change code is faster in TPT/SVF than in DF1/TDF2.

What you mean is just that we may change the cutoff frequency more often if we use a TPT/SVF structure, since that’s the point.

You’re welcome

Again, SVF just means “we use the SVF circuit as a basis”, it doesn’t say anything else, we could have a digital filter process called SVF/DF or SVF/TPT.


DF1/TDF2 is not designed for frequency changes, so indeed there may be more computations there. But for the same order and for the processing function, TPT is equal to or slower than DF1 by definition, as you have to store the state, which is implicit in DF1.
For TDF2, you also have a state, so I think that for order 1 or 2 you end up with similar performance, but once you go to higher orders, the state of the SVF doesn’t allow for the simple for loops of TDF2. I might be wrong, I would have to see such an implementation.


TPT is equal or slower than DF1 by definition, as you have to store the state, which is implicit in DF1

Well, in DF1 you need to store two states per order (the last input and the last output), and only one for TDF2 and TPT!
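To make the count concrete, here is what DF1 actually keeps around for a biquad: two past inputs and two past outputs, i.e. four stored values, where a TDF2 or TPT realisation of the same order keeps only two. Illustration only, with my own naming:

```cpp
#include <cassert>

// Direct form 1 biquad: the "state" is the raw signal history.
struct BiquadDF1
{
    float b0 = 1.0f, b1 = 0.0f, b2 = 0.0f, a1 = 0.0f, a2 = 0.0f;
    float x1 = 0.0f, x2 = 0.0f; // two past inputs
    float y1 = 0.0f, y2 = 0.0f; // two past outputs

    float processSample (float x)
    {
        const float y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;   // shift the input history
        y2 = y1; y1 = y;   // shift the output history
        return y;
    }
};
```

Whether those four values live in named members or implicitly in a circular buffer over the input/output blocks is an implementation choice, which is what the disagreement above seems to be about.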

but once you go for higher orders, the state of the SVF doesn’t allow for the simple for loops of TDF2

What is a high-order SVF?

Nope. You don’t store anything if you do it properly; I don’t store anything. You just have input and output values, no state. You can’t compare the history with the state, because for TDF2 and SVF you are writing and reading the state, which adds stalls in the pipeline.
A high-order SVF would be order 3, 4… (more than a biquad), where you don’t have a simple update equation like TDF2. Or perhaps I haven’t looked properly and you already have them in JUCE?

Nope. You don’t store anything if you do it properly. I don’t store anything. You just have input and output values, no state.

Well, if you do block processing you don’t need to store anything during the processing of a given block, but at least for the next block you need to retrieve the input and output values from the previous block, right?

High order SVF would be order 3, 4… (more than a biquad) where you don’t have a simple update equation like TDF2.

I still don’t understand what you mean. “High-order SVF” means absolutely nothing, because an SVF is a circuit with two capacitors, so any equivalent SVF-modelled filter is order 2.

If what you mean is “a high-order digital filter using the TPT structure or any equivalent”, then the update code depends on the circuit, and might be very simple (8 cascaded decoupled first-order lowpass filters) or very complicated (try the guitar amplifier tone stack circuit: only order 3, but hard to do by hand). And I’m not sure doing it with TDF2 instead would make things a lot easier.

TDF2 is really easy for higher-order filters. I mean, that’s what you currently have, and that’s it. Doing so with the SVF methodology gets more than hairy.
But once again, let’s compare DF1 (the simple DF1, not the one I’ve implemented, as it has vectorization on top of it) and TDF2. For each output sample, you need to read in_order + 1 samples from the input and out_order samples from the output, and you have one write. The nice thing is that for the input samples you get amortized access of one sample per output, as it’s read-only. Not so for the output, as you need to wait for the write to be committed before you can use it for the next sample.
For TDF2, you copy the states for each sample, so max_order writes, plus read accesses to the input (which can be amortized), the output and the previous state. So you end up with far more writes and double the number of reads. Some may not be noticeable, but you will take a hit.

I feel like the terminology is so poorly standardized here; it should be called a “TPT implementation of a State Variable Filter topology”, but that doesn’t roll off the tongue. Certainly better than “Zero Delay Feedback”, but that’s a flame war better fought on KVR.

I’m wondering, have you analyzed the difference in pole-zero loci for the TPT SVF vs the TDF2 or Zölzer topologies?

Also worth mentioning: if you ever intend to extend the DSP module for fixed-point support, the SVF implementation is fundamentally a TDF2-like structure where the poles are implemented before the zeros, and on fixed-point systems you can run into internal overflow issues.

Again, that doesn’t mean anything. Please use “TPT implementation of a State Variable Filter topology” or any equivalent instead of “SVF methodology”; that’s just wrong!

I’m not sure it would be that easy to do, since the pole/zero concept doesn’t make a lot of sense there, but indeed it would be great to see properly how the system behaves when a parameter changes. I know I saw a few articles over the past years about the stability of the various IIR filter structures (see the last DAFx maybe), and of course there is the (old) thesis of Tim Stilson.

You’re right, but I’m not sure it would be useful to have fixed-point support in JUCE.

Thanks for the reading suggestions! I’ll look into it; there may be some edge cases because of finite-precision effects when you start modulating the filter at audio rate with a high-Q filter, or one that has begun to oscillate.

You’re right, but I’m not sure it would be useful to have fixed-point support in JUCE.

JUCE on embedded Linux is a thing now, right? There are still arguments to be made that fixed point is lower power on some processors that might run it…

I imagine it wouldn’t be easy to add fixed-point support in JUCE for the current classes; most of the associated code would have to be rewritten from scratch. It would make sense to have something like a new module called fixed_point_dsp!