Plugin DSP Filter Architecture

Hi all,

I have a question about the suitability of my plugin architecture.

I am computing the effects of atmospheric absorption on audio signals, and I am trying to do this in 1/3-octave bands. I calculated my Q value via Q = Fc / BW, where Fc is the centre frequency of the band and BW is the bandwidth of the filter. This gave me a Q value of 1.12246204830937 for all filters. I also changed Q to 0.5 for testing purposes, but in both cases I get distortion when running the plugin with audio input. However, when I do not set the filter coefficients and just process the whole audio buffer without filtering, I get no distortion and the plugin works perfectly. My question is: is my attempt to filter the whole audible frequency spectrum via 1/3-octave bands a bad approach, or are there issues in my coding implementation?

Many thanks all,

void AirAbsorption::prepareToPlay (double sampleRate, int samplesPerBlock)
{
    // Set band-pass coefficients for each 1/3-octave band (float and double paths)
    for (size_t i = 0; i < octFiltersFloat.size(); i++) {
        octFiltersFloat[i].coefficients = juce::dsp::IIR::Coefficients<float>::makeBandPass(sampleRate, octaveMidFreq[i], qFloat);
        octFiltersDouble[i].coefficients = juce::dsp::IIR::Coefficients<double>::makeBandPass(sampleRate, octaveMidFreqDouble[i], q);
    }

    juce::dsp::ProcessSpec spec { sampleRate, static_cast<juce::uint32> (samplesPerBlock), 2 };

    //===Prepare the oct filters===
    for (size_t i = 0; i < octFiltersFloat.size(); i++) {
        octFiltersFloat[i].prepare(spec);
        octFiltersDouble[i].prepare(spec);
    }

    // Clear the per-band buffers to avoid noise
    for (auto& buffer : octBuffersFloat) {
        buffer.setSize(2, samplesPerBlock);
        buffer.clear();
    }

    for (auto& buffer : octBuffersDouble) {
        buffer.setSize(2, samplesPerBlock);
        buffer.clear();
    }
}

void AirAbsorption::processBlock (juce::AudioSampleBuffer& buffer, juce::MidiBuffer&)
{
    auto numSamples = buffer.getNumSamples();
    auto numChannels = buffer.getNumChannels();

    assert(octBuffersFloat.size() == octFiltersFloat.size());

    // Scratch copy of the input to run through the band filters
    juce::AudioBuffer<float> tempBuffer;
    tempBuffer.makeCopyOf(buffer, false);

    // Filter the scratch buffer with each band-pass filter and store a copy per band
    for (size_t i = 0; i < octFiltersFloat.size(); ++i) {
        for (int channel = 0; channel < numChannels; ++channel) {
            auto* data = tempBuffer.getWritePointer(channel);
            for (int sample = 0; sample < numSamples; ++sample) {
                data[sample] = octFiltersFloat[i].processSample(data[sample]);
            }
        }
        octBuffersFloat[i] = tempBuffer;
    }

    std::vector<juce::SmoothedValue<float>> gainsVector = calculateGainFloat();

    // Debug print of the calculated gains
    for (size_t i = 0; i < gainsVector.size(); ++i) {
        std::cout << "float gains Vector: " << gainsVector[i].getNextValue() << std::endl;
    }

    for (size_t i = 0; i < octFiltersFloat.size(); ++i) {
        for (int ch = 0; ch < numChannels; ++ch) {
            size_t idx = static_cast<size_t>(ch); // this is safe as it's always non-negative
            auto* data = octBuffersFloat[idx].getWritePointer(ch);
            for (int sample = 0; sample < numSamples; ++sample) {
                data[sample] = data[sample] * juce::Decibels::decibelsToGain(gainsVector[i].getNextValue());
            }
        }
    }
    gainsVector.clear();

    // Sum the processed bands back into the output buffer
    auto addFilterBand = [nc = numChannels, ns = numSamples](auto& inputBuffer, const auto& source){
        for (auto i = 0; i < nc; ++i){
            inputBuffer.addFrom(i, 0, source, i, 0, ns);
        }
    };

    for (size_t i = 0; i < octBuffersFloat.size(); ++i) {
        addFilterBand(buffer, octBuffersFloat[i]);
    }

    // Find the maximum amplitude after summing all bands
    float maxAmplitude = 0.0f;
    for (int channel = 0; channel < numChannels; ++channel) {
        auto* data = buffer.getReadPointer(channel);
        for (int sample = 0; sample < numSamples; ++sample) {
            maxAmplitude = juce::jmax(maxAmplitude, std::abs(data[sample]));
        }
    }

    // If the maximum amplitude exceeds 1.0, scale down the entire buffer
    if (maxAmplitude > 1.0f) {
        buffer.applyGain(1.0f / maxAmplitude);
    }
}

It looks like you’re processing the left and right channels with the same filter instance. You need two separate filter instances, one for each channel.
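For what it's worth, here is a minimal sketch of one way to arrange that: one filter instance per band per channel, with the coefficients shared between the two instances. The names, band count and centre-frequency array below are placeholders, not your actual members:

#include <array>
#include <juce_dsp/juce_dsp.h>

// Hypothetical layout: bandFilters[band][channel], so each IIR instance
// only ever sees one channel's continuous sample stream.
static constexpr size_t numBands    = 31; // placeholder 1/3-octave band count
static constexpr size_t numChannels = 2;

std::array<std::array<juce::dsp::IIR::Filter<float>, numChannels>, numBands> bandFilters;

void prepareBandFilters (double sampleRate, int samplesPerBlock,
                         const std::array<float, numBands>& centreFreqs, float q)
{
    // Each instance handles a single channel, so prepare with a mono spec
    juce::dsp::ProcessSpec monoSpec { sampleRate,
                                      static_cast<juce::uint32> (samplesPerBlock), 1 };

    for (size_t band = 0; band < numBands; ++band)
    {
        // The coefficients object is reference-counted and identical for L and R,
        // so both channel instances can share it; only the filter state differs.
        auto coeffs = juce::dsp::IIR::Coefficients<float>::makeBandPass (sampleRate,
                                                                         centreFreqs[band], q);
        for (auto& filter : bandFilters[band])
        {
            filter.coefficients = coeffs;
            filter.prepare (monoSpec);
            filter.reset();
        }
    }
}

// In processBlock, index by band *and* channel:
//   data[sample] = bandFilters[band][(size_t) channel].processSample (data[sample]);

JUCE also has juce::dsp::ProcessorDuplicator, which wraps a mono processor and runs one copy of it per channel, if you'd rather not manage the per-channel instances by hand.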

Thank you for your response.

Why is it an issue to use the same filters for the left and right channels? I want exactly the same processing and filters for both L and R, so why does using the same filter instances not work? Not doubting you, I'm purely interested in what is going on!

Filter implementations are stateful: in order to work, they store a small portion of the signal history and expect you to feed in a continuous signal. If you use the same instance for multiple channels, the filter won't see a continuous stream of samples from one channel but an interleaved stream of samples from the different channels, which no longer makes up a continuous signal history and leads to audible artefacts.
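To make that concrete, here is a bare-bones direct-form-I biquad, a generic textbook sketch rather than JUCE's actual implementation, showing the history a filter instance carries between calls:

// Minimal direct-form-I biquad sketch to illustrate filter state.
struct Biquad
{
    // Coefficients (b = feed-forward, a = feedback), assumed already normalised
    float b0 = 1.0f, b1 = 0.0f, b2 = 0.0f, a1 = 0.0f, a2 = 0.0f;

    // Signal history: the last two inputs and outputs this instance has seen
    float x1 = 0.0f, x2 = 0.0f, y1 = 0.0f, y2 = 0.0f;

    float processSample (float x)
    {
        float y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;

        // Shift the history. This assumes x directly follows x1 in ONE signal:
        // feed it L, R, L, R, ... and x1/y1 belong to the other channel,
        // so the recursion runs on a "signal" that never existed.
        x2 = x1;  x1 = x;
        y2 = y1;  y1 = y;
        return y;
    }
};

That is why sharing one instance across channels audibly distorts: every other call computes its feedback terms from the other channel's history.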
