Hi there,
I’m currently implementing a MIMO feedback delay network (FDN) in JUCE. As I’m a complete noob to JUCE and also a fairly novice C++ programmer, there are a few upcoming design decisions on which I’d appreciate this community’s input.
So here goes: with the help of one of the demos I have managed to get a multichannel delay working, and I also have a pretty hacked-together solution for multiplying all the channel data with the feedback matrix. The next step is to give each channel its own delay length and gain. If I understand the AudioBuffer class correctly, it does not support separate gains and buffer lengths for individual channels, so this is where it gets tricky for me.
The way I would go about this is to use an array of AudioBuffer instances, i.e. give each channel its own AudioBuffer. For the delay alone this seems feasible; however, I will need a much more efficient way to multiply all the delayed samples with the feedback matrix.
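In case a concrete starting point helps, here is a minimal, JUCE-free sketch of the per-channel idea: one plain circular buffer per channel, so each channel gets its own delay length and feedback gain. `DelayLine` and `makeDelayLines` are hypothetical names for illustration, not JUCE classes:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: one circular buffer per channel, so every channel
// can have an independent delay length and feedback gain.
struct DelayLine
{
    std::vector<float> buffer;
    std::size_t writeIndex = 0;
    float gain = 1.0f;

    DelayLine (std::size_t lengthInSamples, float g)
        : buffer (lengthInSamples, 0.0f), gain (g) {}

    // Push one input sample, return the delayed (and gain-scaled) output.
    float process (float input)
    {
        const float output = buffer[writeIndex] * gain; // oldest sample
        buffer[writeIndex] = input;
        writeIndex = (writeIndex + 1) % buffer.size();
        return output;
    }
};

// Build one delay line per channel, each with its own length and gain.
inline std::vector<DelayLine> makeDelayLines (const std::vector<std::size_t>& lengths,
                                              const std::vector<float>& gains)
{
    std::vector<DelayLine> lines;
    for (std::size_t ch = 0; ch < lengths.size(); ++ch)
        lines.emplace_back (lengths[ch], gains[ch]);
    return lines;
}
```

In a processBlock you would then loop over channels and call `process` per sample; the same structure works if each element is a `juce::AudioBuffer` instead of a raw `std::vector`.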
My current “solution” is to copy each sample into an Eigen matrix, perform the multiplication with the feedback matrix, and then copy the result back into the input/output buffer in the same sample-by-sample way:
template <typename FloatType>
void FdnReverbAudioProcessor::applyMixingMatrix (Eigen::MatrixXd& mixingMatrix, AudioBuffer<FloatType>& buffer, AudioBuffer<FloatType>& delayBuffer)
{
    const int numSamples  = delayBuffer.getNumSamples();
    const int numChannels = delayBuffer.getNumChannels();

    Eigen::MatrixXd bufferMatrix = Eigen::MatrixXd::Zero (numChannels, numSamples);

    // delayBuffer -> bufferMatrix (read-only access is enough here)
    for (int channel = 0; channel < numChannels; ++channel)
    {
        const FloatType* channelData = delayBuffer.getReadPointer (channel);

        for (int i = 0; i < numSamples; ++i)
            bufferMatrix (channel, i) = channelData[i];
    }

    // apply mixing matrix to the whole block at once
    bufferMatrix = mixingMatrix * bufferMatrix;

    // bufferMatrix -> mainBuffer (accumulate, casting back from double)
    for (int channel = 0; channel < numChannels; ++channel)
    {
        FloatType* channelData = buffer.getWritePointer (channel);

        for (int i = 0; i < numSamples; ++i)
            channelData[i] += static_cast<FloatType> (bufferMatrix (channel, i));
    }
}
As you can imagine, the cost is substantial: two full block copies plus an O(numChannels² × numSamples) multiply per block. Even in a dual-channel setup I can max out my system by cranking up the delay buffer size, and eventually I need to support channel counts of 36 or possibly even higher.
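One trick that comes up a lot in FDN literature (not specific to this code, so treat it as an assumption about what matrix you can live with): if a Householder feedback matrix A = I − (2/N)·ones(N, N) is acceptable, applying it needs only the sum of the channel samples, so the per-sample cost drops from O(N²) to O(N) and no matrix is stored at all. A minimal sketch:

```cpp
#include <vector>

// Sketch of an O(N)-per-sample Householder mix, a common FDN feedback
// matrix choice: A = I - (2/N) * ones(N, N). A is orthogonal, so the mix
// is energy-preserving, and applying it only needs the sum of the inputs.
inline void householderMixInPlace (std::vector<float>& samples)
{
    float sum = 0.0f;
    for (float s : samples)
        sum += s;

    const float scale = 2.0f / static_cast<float> (samples.size());
    for (float& s : samples)
        s -= scale * sum;   // y[i] = x[i] - (2/N) * sum(x)
}
```

For N = 36 this replaces a 36×36 multiply (1296 multiply-adds per sample frame) with one sum and 36 subtractions, and it scales linearly with channel count. A fast Walsh–Hadamard transform is the other common structured choice, at O(N log N).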
So this is where I’m at right now. If any of you can provide some input on how this could be tackled efficiently, I would greatly appreciate the help.