Multichannel convolution filter

Hi, does JUCE’s Convolution class support multichannel filters? For example, can I convolve an 8-channel input with an 8-channel filter, i.e. one unique filter for each input channel? Thanks!

No, the Convolution class only supports stereo and mono processing. It might be possible to run multiple instances in parallel for multichannel applications, though.


Thanks, Reuk. How would you create multiple instances of the Convolution?

As an example, you could have a std::vector<std::unique_ptr<Convolution>> with a size that matches the number of channels that you need to process. You’d then set up each convolution with a different mono impulse response which you could extract from a multichannel impulse response file. Finally, during processing, you’d send each channel of input to the corresponding convolution instance in the vector.
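
A minimal sketch of that approach (untested, assuming a recent JUCE with the juce::dsp module; the buffer and function names here are just illustrative):

```cpp
// One mono juce::dsp::Convolution per channel, each loaded with one channel
// of a multichannel impulse response.
std::vector<std::unique_ptr<juce::dsp::Convolution>> convolvers;

void prepareConvolvers (const juce::dsp::ProcessSpec& spec,
                        const juce::AudioBuffer<float>& irBuffer,   // multichannel IR
                        double irSampleRate)
{
    convolvers.clear();

    for (int ch = 0; ch < (int) spec.numChannels; ++ch)
    {
        auto conv = std::make_unique<juce::dsp::Convolution>();
        conv->prepare ({ spec.sampleRate, spec.maximumBlockSize, 1 });  // each instance runs as mono

        // Extract one channel of the multichannel IR into its own mono buffer
        juce::AudioBuffer<float> monoIR (1, irBuffer.getNumSamples());
        monoIR.copyFrom (0, 0, irBuffer, ch, 0, irBuffer.getNumSamples());

        conv->loadImpulseResponse (std::move (monoIR), irSampleRate,
                                   juce::dsp::Convolution::Stereo::no,
                                   juce::dsp::Convolution::Trim::no,
                                   juce::dsp::Convolution::Normalise::no);

        convolvers.push_back (std::move (conv));
    }
}

void processBlock (juce::AudioBuffer<float>& buffer)
{
    juce::dsp::AudioBlock<float> block (buffer);

    // Send each channel of the input to the matching convolution instance
    for (size_t ch = 0; ch < (size_t) buffer.getNumChannels(); ++ch)
    {
        auto channelBlock = block.getSingleChannelBlock (ch);
        juce::dsp::ProcessContextReplacing<float> context (channelBlock);
        convolvers[ch]->process (context);
    }
}
```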


Thanks, Reuk! I can see how that would work. I’ve implemented something that handles two channels that way, and I’ll work on extending it to more channels.

I was looking at the JUCE source code, and it seems like the existing code could be extended to support a multichannel use case. What is the reason it is currently limited to two channels?

Also, in my use case I want to process each input channel with multiple impulse responses (for microphone array processing) and sum the outputs together. Is there an efficient way to do that with the Convolution class, so that the FFT of each input channel doesn’t have to be computed multiple times?

Thanks!

No, the Convolution class doesn’t currently provide a facility like that.


Hi Reuk, I followed your advice and created a std::vector<std::unique_ptr<Convolution>> of size 2304. I have 64 input channels and 36 output channels, but I need 2304 convolutions because I’m processing the signals from a 5th-order Ambisonic microphone. With that many Convolution objects, each one created its own background message queue, which was very slow, so I switched to a single shared message queue. I don’t update the impulse responses after loading them once. How big should I make the message queue?
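
For reference, the shared-queue setup looks roughly like this (simplified; the queue size here is just a placeholder, since I don’t know what a sensible value would be):

```cpp
// One ConvolutionMessageQueue shared by every Convolution instance,
// instead of the per-instance queue you get with the default constructor.
juce::dsp::ConvolutionMessageQueue sharedQueue { 2048 };   // queue size is a guess
std::vector<std::unique_ptr<juce::dsp::Convolution>> convolvers;

void createConvolvers (int numConvolutions)   // 2304 in my case
{
    convolvers.clear();
    convolvers.reserve ((size_t) numConvolutions);

    for (int i = 0; i < numConvolutions; ++i)
        convolvers.push_back (std::make_unique<juce::dsp::Convolution> (sharedQueue));
}
```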

This mostly worked, but there was some stuttering, so I created 36 threads to process the convolutions. However, now I’m getting a memory access error. Each thread reads all of the input channels and writes to its own channel in a separate AudioBuffer. Is there a problem with using one message queue from many threads, or is something else going on?

Many thanks!

Sounds like the em64 🙂

With that many channels (in and out) it’s better to write your own convolution engine, one that sums in the frequency domain before transforming back to the time domain for the outputs, and that also shares the forward transform of each input signal. Looking only at the inputs, that’s 64 forward transforms vs. 2304, which is a huge difference! For the outputs it’s 36 inverse transforms vs. 2304, an even bigger reduction.

For a first prototype I suppose it’s okay to do quite a naive fast convolution without partitioning, since I’d guess your impulse responses are fairly short.
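
Very roughly, a non-partitioned version of that idea could look like the sketch below (untested, single-threaded, written against juce::dsp::FFT; the MatrixConvolver name and the irs[in][out] layout are just for illustration, and it assumes a fixed block size and a time-domain impulse response for every input/output pair):

```cpp
#include <juce_dsp/juce_dsp.h>
#include <cmath>
#include <complex>
#include <memory>
#include <vector>

// Non-partitioned fast convolution for a full matrix of impulse responses:
// one forward FFT per input channel, one inverse FFT per output channel,
// and everything in between is complex multiply-accumulate in the frequency domain.
struct MatrixConvolver
{
    int numIns = 0, numOuts = 0, blockSize = 0, fftSize = 0;
    std::unique_ptr<juce::dsp::FFT> fft;

    std::vector<std::vector<std::vector<std::complex<float>>>> H; // [in][out] filter spectra
    std::vector<std::vector<std::complex<float>>> X;              // [in] input spectra
    std::vector<std::complex<float>> padded, Y, y;                // scratch buffers
    std::vector<std::vector<float>> tail;                         // [out] overlap-add tails

    // irs[in][out] is the time-domain impulse response from input `in` to output `out`
    void prepare (int ins, int outs, int block,
                  const std::vector<std::vector<std::vector<float>>>& irs)
    {
        numIns = ins; numOuts = outs; blockSize = block;
        const auto irLen = (int) irs[0][0].size();
        const auto order = (int) std::ceil (std::log2 (double (block + irLen - 1)));
        fftSize = 1 << order;
        fft = std::make_unique<juce::dsp::FFT> (order);

        padded.resize (fftSize);
        Y.resize (fftSize);
        y.resize (fftSize);
        X.assign (ins, std::vector<std::complex<float>> (fftSize));
        tail.assign (outs, std::vector<float> (fftSize, 0.0f));
        H.assign (ins, std::vector<std::vector<std::complex<float>>> (outs));

        for (int i = 0; i < ins; ++i)
            for (int o = 0; o < outs; ++o)
            {
                std::fill (padded.begin(), padded.end(), std::complex<float>());
                for (int n = 0; n < irLen; ++n)
                    padded[n] = irs[i][o][n];

                H[i][o].resize (fftSize);
                fft->perform (padded.data(), H[i][o].data(), false); // filter spectrum
            }
    }

    // input: numIns pointers to blockSize samples, output: numOuts pointers to blockSize samples
    void process (const float* const* input, float* const* output)
    {
        // One forward transform per input channel (64 instead of 2304)
        for (int i = 0; i < numIns; ++i)
        {
            std::fill (padded.begin(), padded.end(), std::complex<float>());
            for (int n = 0; n < blockSize; ++n)
                padded[n] = input[i][n];

            fft->perform (padded.data(), X[i].data(), false);
        }

        // One inverse transform per output channel (36 instead of 2304)
        for (int o = 0; o < numOuts; ++o)
        {
            std::fill (Y.begin(), Y.end(), std::complex<float>());

            for (int i = 0; i < numIns; ++i)               // sum in the frequency domain
                for (int k = 0; k < fftSize; ++k)
                    Y[k] += X[i][k] * H[i][o][k];

            // JUCE scales the inverse FFT by 1/fftSize; add the scaling yourself if your FFT doesn't
            fft->perform (Y.data(), y.data(), true);

            auto& t = tail[o];

            for (int n = 0; n < blockSize; ++n)            // current block plus the previous tail
                output[o][n] = y[n].real() + t[n];

            for (int n = 0; n < fftSize - blockSize; ++n)  // shift the tail, add the new spill-over
                t[n] = y[n + blockSize].real() + t[n + blockSize];
        }
    }
};
```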

There are some open-source matrix convolvers out there to take inspiration from (e.g. mcfx_convolver or the x-mcfx update; the SPARTA suite also has one).
