Feedback Delay Network software design considerations


Hi there,

I’m currently in the process of implementing a MIMO feedback delay network in JUCE. As I’m a complete noob to JUCE and also a fairly novice C++ programmer, there are a few upcoming design decisions on which I’d appreciate this community’s input.

So here goes: with the help of one of the demos I have managed to get a multichannel delay working, and I also have a pretty hacked-together solution for multiplying all the channel data with the feedback matrix. The next step for me is to give each channel its own delay length and gain. If I understand the AudioBuffer class correctly, it does not support separate gains and buffer lengths for individual channels, so this is where it gets tricky for me.

The way I would go about this is to use an array of AudioBuffer instances, that is, give each channel its own AudioBuffer instance. For the delay alone this seems feasible; however, I will need a much more efficient way to multiply all the delayed samples with the feedback matrix.
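Since AudioBuffer carries one length for all channels, the per-channel state could instead live in a small class of its own. A minimal sketch in plain C++ (no JUCE types; `DelayLine` and its members are invented for illustration):

```cpp
#include <cstddef>
#include <vector>

// Minimal per-channel delay line: each instance owns its own
// length and gain, independent of the other channels.
struct DelayLine
{
    std::vector<float> buffer;   // circular storage, one channel
    std::size_t writePos = 0;
    float gain = 1.0f;           // per-channel gain

    DelayLine (std::size_t lengthInSamples, float g)
        : buffer (lengthInSamples, 0.0f), gain (g) {}

    // Push one input sample, return the delayed (and gained) sample.
    float process (float input)
    {
        const float delayed = buffer[writePos] * gain;
        buffer[writePos] = input;
        writePos = (writePos + 1) % buffer.size();
        return delayed;
    }
};
```

An FDN processor would then hold something like `std::vector<DelayLine>`, one entry per channel, each constructed with its own length and gain.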

My current “solution” is to iterate over each sample and insert it into an Eigen matrix, perform the multiplication with the feedback matrix, and then copy the result back into the input/output buffer in the same way.

template <typename FloatType>
void FdnReverbAudioProcessor::applyMixingMatrix (Eigen::MatrixXd& mixingMatrix, AudioBuffer<FloatType>& buffer, AudioBuffer<FloatType>& delayBuffer)
{
    const int numSamples  = delayBuffer.getNumSamples();
    const int numChannels = delayBuffer.getNumChannels();

    Eigen::MatrixXd bufferMatrix = Eigen::MatrixXd::Zero (numChannels, numSamples);

    // delayBuffer -> bufferMatrix (read-only, so getReadPointer suffices)
    for (int channel = 0; channel < numChannels; ++channel)
    {
        const FloatType* channelData = delayBuffer.getReadPointer (channel);

        for (int i = 0; i < numSamples; ++i)
            bufferMatrix (channel, i) = channelData[i];
    }

    // apply mixing matrix
    bufferMatrix = mixingMatrix * bufferMatrix;

    // bufferMatrix -> buffer
    for (int channel = 0; channel < numChannels; ++channel)
    {
        FloatType* const channelData = buffer.getWritePointer (channel);

        for (int i = 0; i < numSamples; ++i)
            channelData[i] += static_cast<FloatType> (bufferMatrix (channel, i));
    }
}

As you can imagine, the cost is ridiculous: even in a dual-channel setup I can max out my system by cranking up the delay buffer size. Eventually, though, I need to be able to support channel counts of 36 or possibly even higher.

So this is where I’m at right now. If any of you can provide some input on how this could be tackled efficiently, I would greatly appreciate the help.


Try to avoid allocating an Eigen matrix every process block; you should save some resources there.
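In Eigen terms that would mean making bufferMatrix a member of the processor and calling resize() once from prepareToPlay() instead of constructing it in the callback. The same "allocate once, reuse every block" pattern in plain C++ (a sketch; `MixerScratch` and its methods are invented for illustration):

```cpp
#include <cstddef>
#include <vector>

// Scratch storage that lives in the processor object and is sized
// up front, so the audio callback itself never touches the heap.
class MixerScratch
{
public:
    // Called once, e.g. from prepareToPlay().
    void prepare (std::size_t numChannels, std::size_t maxBlockSize)
    {
        stride = maxBlockSize;
        scratch.assign (numChannels * maxBlockSize, 0.0f);
    }

    // Called from the audio callback: no allocation, just pointer math.
    float* row (std::size_t channel)
    {
        return scratch.data() + channel * stride;
    }

private:
    std::vector<float> scratch;
    std::size_t stride = 0;
};
```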


Fair point, thanks!

I will also see if I can’t copy an entire channel buffer at once instead of doing it per sample; maybe Eigen supports that somehow.


A simple way to avoid the allocation is simply to make the size a template parameter. I haven’t given it much thought, but I’m currently using a similar design, where I also populate an Eigen matrix (actually a vector) from the previous delay lines’ values and use a custom mixing matrix to compute the new value.
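To illustrate the template-parameter idea without pulling in Eigen (a sketch with invented names; the Eigen equivalent would be a fixed-size `Eigen::Matrix<float, N, N>`, which lives on the stack and never heap-allocates):

```cpp
#include <array>
#include <cstddef>

// Mix one frame (one sample per channel) through an N x N feedback
// matrix. With N fixed at compile time, everything stays on the stack.
template <std::size_t N>
std::array<float, N> mixFrame (const std::array<std::array<float, N>, N>& matrix,
                               const std::array<float, N>& frame)
{
    std::array<float, N> out {};

    for (std::size_t row = 0; row < N; ++row)
        for (std::size_t col = 0; col < N; ++col)
            out[row] += matrix[row][col] * frame[col];

    return out;
}
```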
BTW, I don’t think you can populate the delay line this way, as it is recursive: at some point, channel data will have to be reinjected into the buffer. Your design doesn’t account for this.


I have a separate function that populates the delay buffer from the I/O buffer. :wink:

Thanks for the link!


The issue is that if you have a delay of 10 samples and a buffer of 32, you should make a pass for the first 10 samples, then the next 10, then the next 10, and then the final 2. Otherwise you don’t feed your delay line the feedback it should have received.
I don’t see this in your code, so I wanted to point it out.
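That chunking can be sketched as follows (a hedged sketch; `chunkSizes` is an invented helper, and in a real processor one pass of delay-read, mix, and feedback-write would run per chunk where the comment indicates). The block is split so no chunk is longer than the shortest delay, which guarantees that feedback written by one pass is visible to the next:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Split a block of numSamples into chunks no longer than the shortest
// delay length, so each pass can see the feedback written by the last.
std::vector<std::size_t> chunkSizes (std::size_t numSamples, std::size_t shortestDelay)
{
    std::vector<std::size_t> chunks;

    for (std::size_t offset = 0; offset < numSamples;)
    {
        const std::size_t chunk = std::min (shortestDelay, numSamples - offset);
        chunks.push_back (chunk);   // one delay/mix/feedback pass would go here
        offset += chunk;
    }

    return chunks;
}
```

For the example above, a 32-sample block with a shortest delay of 10 yields passes of 10, 10, 10, and 2 samples.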


Ah, now I see what you meant. Yes, I will take this into account, thank you for the tip!