How to implement a filter bank?

Hello everyone, I hope you are well!
I am currently developing a new audio plugin of the “tracking space” type, with which I want to attenuate certain frequencies of an input signal. For this I need to implement a 1/3-octave-band filter bank (roughly 32 filters in total) so that I can analyze each 1/3-octave band and obtain the energy or level in each band.

I have been thinking of running the 32 filters in parallel on multiple threads, but I don’t know whether that is the best or most efficient way to do it. I would appreciate any help on how to approach this problem.

Thank you very much in advance for everything!
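For reference, a minimal sketch of one common way to lay out 1/3-octave bands (base-2 spacing around 1 kHz; the band indices and edge ratio here are assumptions chosen to cover roughly 20 Hz to 20 kHz):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Nominal 1/3-octave bands, base-2 spacing around 1 kHz:
// band n has centre fc = 1000 * 2^(n/3) and edges a sixth of an octave
// on either side. n = -17 .. 13 gives 31 bands, roughly 20 Hz to 20 kHz.
struct Band { double low, centre, high; };

std::vector<Band> makeThirdOctaveBands (int firstIndex = -17, int lastIndex = 13)
{
    std::vector<Band> bands;
    const double edgeRatio = std::pow (2.0, 1.0 / 6.0); // half of a 1/3 octave

    for (int n = firstIndex; n <= lastIndex; ++n)
    {
        const double fc = 1000.0 * std::pow (2.0, n / 3.0);
        bands.push_back ({ fc / edgeRatio, fc, fc * edgeRatio });
    }
    return bands;
}

int main()
{
    for (const auto& b : makeThirdOctaveBands())
        std::printf ("%9.1f Hz   (%.1f .. %.1f)\n", b.centre, b.low, b.high);
}
```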

Additional threads are most likely not going to help at all. You should just try implementing it with the simplest possible code in a single thread and see what the CPU usage is like. If it’s too high, then you could look into using SIMD code for the filters, so you could perhaps calculate 4 or 8 filters at once. 32 (simple) filters should be a piece of cake to run on a modern processor even without SIMD code, though. (Depending, of course, on how exactly the filters are implemented.)
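To give that suggestion a concrete shape, here is a minimal single-threaded sketch (not from the post above): a bank of band-pass biquads processed sequentially, with the coefficients following the common RBJ “Audio EQ Cookbook” band-pass formula and the per-band Q derived from the 1/3-octave band edges — both of these choices are assumptions, not something stated in the thread.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One biquad band-pass section (RBJ "Audio EQ Cookbook" band-pass,
// constant 0 dB peak gain), processed in transposed direct form II.
struct BandpassBiquad
{
    double b0 = 0, b1 = 0, b2 = 0, a1 = 0, a2 = 0; // normalised so a0 == 1
    double z1 = 0, z2 = 0;                         // filter state

    void setup (double centreHz, double q, double sampleRate)
    {
        const double pi    = 3.14159265358979323846;
        const double w0    = 2.0 * pi * centreHz / sampleRate;
        const double alpha = std::sin (w0) / (2.0 * q);
        const double a0    = 1.0 + alpha;

        b0 =  alpha / a0;
        b1 =  0.0;
        b2 = -alpha / a0;
        a1 = -2.0 * std::cos (w0) / a0;
        a2 = (1.0 - alpha) / a0;
    }

    double process (double x)
    {
        const double y = b0 * x + z1;
        z1 = b1 * x - a1 * y + z2;
        z2 = b2 * x - a2 * y;
        return y;
    }
};

// A plain single-threaded bank: every sample goes through every band and the
// squared output is accumulated to give a per-band energy estimate.
struct FilterBank
{
    std::vector<BandpassBiquad> bands;
    std::vector<double> energy;

    void prepare (const std::vector<double>& centreFrequencies, double sampleRate)
    {
        // Q for a 1/3-octave band, derived from its edge frequencies:
        // Q = fc / (fHigh - fLow) = 1 / (2^(1/6) - 2^(-1/6)) ~= 4.32
        const double q = 1.0 / (std::pow (2.0, 1.0 / 6.0) - std::pow (2.0, -1.0 / 6.0));

        bands.resize (centreFrequencies.size());
        energy.assign (centreFrequencies.size(), 0.0);

        for (std::size_t i = 0; i < bands.size(); ++i)
            bands[i].setup (centreFrequencies[i], q, sampleRate);
    }

    void processBlock (const float* input, int numSamples)
    {
        for (int n = 0; n < numSamples; ++n)
            for (std::size_t i = 0; i < bands.size(); ++i)
            {
                const double y = bands[i].process (input[n]);
                energy[i] += y * y;
            }
    }
};
```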


There are more efficient ways to do it than actually using 32 separate filters.
As a simple example, say you had two bands: you could use a low-pass for the low band and subtract its output from the dry signal to get the high band, replacing one filter with a simple subtraction.
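A minimal sketch of that two-band idea, using a simple one-pole low-pass as the splitting filter (the names and the coefficient formula here are just illustrative choices); the two outputs sum back to the input by construction:

```cpp
#include <cmath>

// Two-band split: a one-pole low-pass gives the low band, and the high band
// is just (dry - low), so the two bands always sum back to the input.
struct TwoBandSplitter
{
    double coeff = 0.0;   // low-pass smoothing coefficient
    double state = 0.0;   // low-pass filter memory

    void prepare (double crossoverHz, double sampleRate)
    {
        const double pi = 3.14159265358979323846;
        coeff = 1.0 - std::exp (-2.0 * pi * crossoverHz / sampleRate);
    }

    void process (double input, double& lowOut, double& highOut)
    {
        state  += coeff * (input - state); // one-pole low-pass
        lowOut  = state;
        highOut = input - lowOut;          // complementary high band, no second filter needed
    }
};
```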

Thank you for your response!
Do you think implementing it the way you describe might add too much latency while calculating all the subtractions?

I think you have a misconception about introducing latency here.

The way audio plugins work is that the host hands blocks of samples to the process function and expects the plugin to process the samples in that block. When playing back audio in real time, you have a certain time slot to perform these calculations, determined by the block size and sample rate. If your code takes too long here, you won’t introduce latency but dropouts, because the sound driver cannot access the samples it expects your plugin to have processed in time. On a modern CPU, a few filters and a few subtractions are nowhere near that deadline. It’s still good practice to keep the processing load as low as possible, since in a real-world session there may be many plugin instances running, all taking resources from that same time slot, but in this particular case I’d suggest not worrying about it too much.
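To make the “time slot” concrete, a tiny sketch with hypothetical but typical numbers:

```cpp
#include <cstdio>

int main()
{
    // Hypothetical but typical host settings.
    const double sampleRate = 48000.0; // Hz
    const int    blockSize  = 512;     // samples handed to process() per call

    // The callback has to finish before the next block is due, i.e. within
    // blockSize / sampleRate seconds -- and that slot is shared by every
    // plugin instance in the session.
    const double budgetMs = 1000.0 * blockSize / sampleRate;
    std::printf ("Time budget per block: %.2f ms\n", budgetMs); // ~10.67 ms here
}
```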

By the way, multithreading as suggested in the first post usually makes things more complicated and often worse, so it should only be a last resort if you find that your algorithm really won’t perform as expected otherwise.

That being said, audio plugins can actually introduce latency. This happens if the DSP algorithms deliberately buffer up samples, either to process them as a block (think of FFT-based stuff) or to have access to a certain portion of the signal history (think of compressors with lookahead), or if they use FIR filters or resampling algorithms that introduce a certain delay. In all these cases, however, the expected latency can be computed if you know the algorithms, and it should be reported to the plugin host, which will then attempt to compensate for it when mixing together channels whose plugin chains introduce different latencies.
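As a minimal sketch of that last point, assuming a hypothetical FFT-based processor that has to collect a full FFT block before it can output anything (the exact figure depends on the overlap scheme, so treat the numbers as illustrative):

```cpp
#include <cstdio>

int main()
{
    // Hypothetical FFT-based processor: it collects fftSize samples before it
    // can produce any output, so in this simple scheme the signal comes out
    // fftSize samples late.
    const int    fftSize    = 2048;
    const double sampleRate = 48000.0;

    const int    latencySamples = fftSize;  // known in advance from the algorithm
    const double latencyMs      = 1000.0 * latencySamples / sampleRate;

    // This is the number to report to the host through the plugin API's
    // latency-reporting call (e.g. setLatencySamples() in JUCE).
    std::printf ("Report %d samples (%.2f ms) of latency\n", latencySamples, latencyMs);
}
```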
