I’ve built a stereo reverb using an array of comb filters, but I’m having trouble coming up with a good way of setting up the SmoothedValues. It seems that I have to set up one per channel, since getNextValue() advances the smoother’s internal state, meaning I can’t reuse it for subsequent channels. The issue is that with n comb filters, that means I need numberOfCombs * numberOfChannels SmoothedValues, which doesn’t seem very performant. Code below. Any suggestions appreciated, thanks!
// Definition of n * m matrix
std::array&lt;std::array&lt;juce::SmoothedValue&lt;float&gt;, 8&gt;, 2&gt; stGainValues;

// This is called every time the parameter is moved
void setGain (float newGain)
{
    const auto limited = juce::jlimit (0.01f, 1.0f, newGain); // clamp before comparing

    if (gain != limited)
    {
        gain = limited;

        for (int i = 0; i < getSize(); i++)
            for (int j = 0; j < 2; j++)
                stGainValues[j][i].setTargetValue (gain);
    }
}
/// Called once per sample
void processSample (float sample, int channel)
{
    for (int i = 0; i < getSize(); i++)
    {
        // rest of code removed for brevity
        combFilters[i].setGain (stGainValues[channel][i].getNextValue());
    }
}
This is the wrong way, since it will make the parameter jump between channels:

for (loop through channels) {
    for (loop through samples) {
        smoothen parameter
        process sample
    }
}
You can change the order of the loops:

for (loop through samples) {
    smoothen parameter
    for (loop through channels) {
        process sample
    }
}
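In C++ that reordering might look something like this (a minimal sketch: LinearSmoother is a stand-in for juce::SmoothedValue so the example is self-contained, and processBlock is an illustrative name, not from the original code):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Minimal linear-ramp smoother standing in for juce::SmoothedValue, just to
// keep the sketch self-contained. One instance per parameter is enough now.
struct LinearSmoother
{
    float current = 0.0f, target = 0.0f, step = 0.0f;
    int stepsLeft = 0;

    void setTargetValue (float newTarget, int rampSamples)
    {
        target = newTarget;
        stepsLeft = rampSamples;

        if (rampSamples > 0)
            step = (target - current) / (float) rampSamples;
        else
            current = target;
    }

    float getNextValue()
    {
        if (stepsLeft <= 0)
            return current = target;

        --stepsLeft;
        return current += step;
    }
};

// Sample-major ordering: advance the smoother ONCE per sample, then feed the
// identical value to every channel, so no per-channel smoothers are needed.
void processBlock (std::array<std::vector<float>, 2>& channels, LinearSmoother& gainSmoother)
{
    for (size_t i = 0; i < channels[0].size(); ++i)
    {
        const float gain = gainSmoother.getNextValue(); // one state advance per sample

        for (auto& channel : channels)                  // every channel sees the same value
            channel[i] *= gain;
    }
}
```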
You can make separate loops for mono and stereo:

if (stereo) {
    for (loop through samples) {
        smoothen parameter
        process left sample
        process right sample
    }
} else {
    for (loop through samples) {
        smoothen parameter
        process mono sample
    }
}
You can put the smoothed values into a buffer:

for (loop through samples) {
    smoothed_param[i] = smoothen parameter
}
for (loop through channels) {
    for (loop through samples) {
        process sample using smoothed_param[i]
    }
}
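A sketch of the buffer approach (all names are illustrative; a plain linear ramp stands in for juce::SmoothedValue here):

```cpp
#include <cstddef>
#include <vector>

// Fill a scratch buffer with the parameter's trajectory for this block and
// return the value the smoother ends on, to carry over into the next block.
float fillSmoothedGain (std::vector<float>& out, float current, float target, int rampSamplesLeft)
{
    for (auto& value : out)
    {
        if (rampSamplesLeft > 0)
        {
            current += (target - current) / (float) rampSamplesLeft;
            --rampSamplesLeft;
        }
        else
        {
            current = target; // no smoothing active: the buffer becomes constant
        }

        value = current;
    }

    return current;
}

// Every channel then reads the same precomputed trajectory, so channel-major
// processing no longer desynchronizes the smoother between channels.
void applyGain (std::vector<std::vector<float>>& channels, const std::vector<float>& smoothedGain)
{
    for (auto& channel : channels)
        for (size_t i = 0; i < channel.size(); ++i)
            channel[i] *= smoothedGain[i];
}
```

When the ramp has finished, the fill loop degenerates to writing a constant, which is where the "no smoothing active" fast path comes from.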
You can reset the smoother for the right channel so it restarts from the right place, although I’m not sure the JUCE one lets you do this easily:

for (loop through channels) {
    if (channel == 0) {
        current_value = smoother.getCurrentValue()
    } else {
        smoother.setCurrentValue(current_value)
    }
    for (loop through samples) {
        smoothen parameter
        process sample
    }
}
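As far as I know juce::SmoothedValue has no public setCurrentValue(), but since a smoother is just a small copyable object you can snapshot the whole thing instead of one value. A minimal sketch (LinearSmoother again stands in for the JUCE class, which should also be copyable):

```cpp
#include <cstddef>
#include <vector>

// Tiny copyable smoother used only to keep the sketch self-contained.
struct LinearSmoother
{
    float current = 0.0f, target = 0.0f, step = 0.0f;
    int stepsLeft = 0;

    float getNextValue()
    {
        if (stepsLeft <= 0)
            return current = target;

        --stepsLeft;
        return current += step;
    }
};

// Snapshot the smoother's state before the first channel and restore it before
// each subsequent one, so every channel walks the identical ramp.
void processBlock (std::vector<std::vector<float>>& channels, LinearSmoother& smoother)
{
    const LinearSmoother saved = smoother; // state at the start of the block

    for (size_t ch = 0; ch < channels.size(); ++ch)
    {
        if (ch > 0)
            smoother = saved;              // rewind for the next channel

        for (auto& sample : channels[ch])
            sample *= smoother.getNextValue();
    }
}
```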
@kerfuffle
I’m curious to see a benchmark between the different smoothing techniques. The buffer approach in particular could be very efficient: for instance, when no smoothing is active you can just fill the buffer with a constant value.
I also use the approach of running the smoother into a buffer, with separate audio-processing code for when smoothing is active and when it isn’t. Sometimes sample-accurate smoothing isn’t needed, or is too CPU-intensive; then you can just downsample the smoother to run once per smaller block. Comb filters of the same size can also be optimized by sharing the same write head for all channels.
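The downsampled (control-rate) smoothing could look something like this sketch, where the gain is updated once per sub-block instead of once per sample (the one-pole coefficient and the 32-sample interval are illustrative choices, not from the post):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// One-pole smoother stepped at control rate rather than audio rate.
struct OnePoleSmoother
{
    float current = 0.0f, target = 0.0f, coeff = 0.5f;

    float nextBlockValue() { return current += coeff * (target - current); }
};

void processBlock (std::vector<float>& mono, OnePoleSmoother& smoother, size_t controlInterval)
{
    for (size_t start = 0; start < mono.size(); start += controlInterval)
    {
        const float gain = smoother.nextBlockValue();   // one update per sub-block
        const size_t end = std::min (start + controlInterval, mono.size());

        for (size_t i = start; i < end; ++i)
            mono[i] *= gain;                            // held constant in between
    }
}
```

The trade-off is a stair-stepped parameter trajectory; for slowly moving parameters the steps are usually inaudible, and the smoother cost drops by a factor of the interval length.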
Between switching the order of the loops and only smoothing when needed, I was able to cut my parameter count (and CPU load) in half. Thank you!
@Mrugalla That write head trick sounds very useful, what’s the idea behind it? Right now the second biggest performance issue is the number of comb filters operating independently.
You can also avoid indexing into the audio buffer multiple times per sample (it may not be exactly what @Mrugalla was referring to, but it can significantly reduce CPU when there are multiple filters):
const auto writerPointer = buffer.getArrayOfWritePointers();

for (int i = 0; i < buffer.getNumSamples(); ++i)
{
    for (int channel = 0; channel < buffer.getNumChannels(); ++channel)
    {
        // read the sample once, run it through the whole filter chain,
        // then write it back once
        auto sample = writerPointer[channel][i];

        for (size_t filterIdx = 0; filterIdx < currentFilterNum; ++filterIdx)
            sample = filters[filterIdx].processSample (static_cast<size_t> (channel), sample);

        writerPointer[channel][i] = sample;
    }
}
Every delay’s ring buffer has one write head and n read heads (in the case of feedback delays, usually just one). But the write head only depends on the block size and the delay size, and produces the same values for every delay, so it doesn’t have to be part of the delay if you want to use multiple ones.
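A minimal sketch of that idea (all names are illustrative, not from the original posts): give every comb a ring buffer of the same length, so one write index can serve the whole bank, and realize each comb’s delay with its own read offset.

```cpp
#include <cstddef>
#include <vector>

// Bank of feedback comb filters sharing a single write head.
struct CombBank
{
    size_t bufferLength;                   // shared ring-buffer length
    size_t writeIndex = 0;                 // ONE write head for the whole bank
    std::vector<std::vector<float>> lines; // one ring buffer per comb
    std::vector<size_t> delays;            // per-comb delay in samples
    std::vector<float> feedback;           // per-comb feedback gain

    CombBank (std::vector<size_t> delaySamples, float fb, size_t maxDelay)
        : bufferLength (maxDelay),
          lines (delaySamples.size(), std::vector<float> (maxDelay, 0.0f)),
          delays (std::move (delaySamples)),
          feedback (delays.size(), fb) {}

    // Process one input sample through every comb, summing the outputs.
    float processSample (float input)
    {
        float out = 0.0f;

        for (size_t c = 0; c < lines.size(); ++c)
        {
            // each comb has its own read offset behind the shared write head
            const size_t readIndex = (writeIndex + bufferLength - delays[c]) % bufferLength;
            const float delayed = lines[c][readIndex];

            lines[c][writeIndex] = input + feedback[c] * delayed;
            out += delayed;
        }

        writeIndex = (writeIndex + 1) % bufferLength; // advanced once, not once per comb
        return out;
    }
};
```

Since the write index is advanced once per sample for the whole bank, adding more combs only adds read offsets, and the same index can be shared across channels too when their buffers are the same size.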