Is some form of FIFO the best option when working with FFT processes?

Hi all,

For my processing, I copy the whole AudioBuffer into a 2D vector, then pass each channel's data to fft.performFrequencyOnlyForwardTransform().

I see that in the tutorials they implement a FIFO, but those are all visualisation-based applications.
I am using my FFT to extract frequencies and compute the effects of atmospheric absorption, specifically the effects of air impedance.

From running the debugger, I can see that my code runs fine, but when the plugin is loaded in FL Studio, the whole of FL crashes.

I assume this means that I am demanding too much from my audio thread. Here is some code,
but my general question is: is it imperative to use some form of FIFO with FFT objects? Or does my general approach suffice (albeit with a need for optimisation)?

void AdvancedAtmosphere::processBlock(juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
  auto totalNumInputChannels = getTotalNumInputChannels();
  int numSamples = buffer.getNumSamples();

  std::vector<std::vector<float>> copiedBufferData(totalNumInputChannels, std::vector<float>(numSamples * 2, 0.0f));

  if(!usePorusMedia() && totalNumInputChannels > 0)
  {
    copiedBufferData = copyBufferToVector(buffer);
    extractFrequencies(copiedBufferData);
    //Now, we can apply the gain to the buffer, sample by sample, as we have an average pressure value for the entire buffer
    float airImpeadanceGain = advancedAtmosCalc->getGain();
    float linearGain = juce::Decibels::decibelsToGain(airImpeadanceGain); // convert dB to linear once, not per sample

    for(auto channel = 0; channel < totalNumInputChannels; ++channel)
    {
      auto* channelData = buffer.getWritePointer(channel);
      for(auto i = 0; i < numSamples; ++i)
      {
        channelData[i] *= linearGain;
      }
    }
  }
}

std::vector<std::vector<float>> AdvancedAtmosphere::copyBufferToVector(juce::AudioBuffer<float>& buffer)
{
  
  int numSamples = buffer.getNumSamples();
  int totalChannels = getTotalNumInputChannels();
  std::vector<std::vector<float>> copiedBufferData(totalChannels, std::vector<float>(numSamples * 2, 0.0f));

  for(auto channel = 0; channel < totalChannels; ++channel)
  {
    auto* channelData = buffer.getReadPointer(channel);
    for(auto i = 0; i < numSamples; ++i)
    {
      copiedBufferData[channel][i] = channelData[i];
    }
  }
  return copiedBufferData;
}

void AdvancedAtmosphere::extractFrequencies(std::vector<std::vector<float>>& bufferData)
{
  for (int channel = 0; channel < 2; ++channel) // Assuming stereo channels for simplicity
  {
    int size = bufferData[channel].size();
    
    // Allocate memory with unique_ptr
    std::unique_ptr<float[]> channelDataArray(new float[size * 2]);
    std::copy(bufferData[channel].begin(), bufferData[channel].end(), channelDataArray.get());
    forwardFFT.performFrequencyOnlyForwardTransform(channelDataArray.get());
    computeAirImpedance(channelDataArray.get(), size);
  }
  bufferData.clear();
}

void AdvancedAtmosphere::computeAirImpedance(float* channelData, int size)
{
  
  float angularFrequency;
  float wavelength;
  float timeThruWave;
  float distance = getDistance();
  float pressurePa;

  //Threshold is relative to the maximum magnitude in the FFT, 10% of the maximum magnitude
  float maxMagnitude = *std::max_element(channelData, channelData + size / 2);
  float threshold = 0.1f * maxMagnitude;

  // Iterate through the first half of the FFT results (since it's mirrored in the second half)
  for (int i = 0; i < size / 2; ++i)
  {
    // If the magnitude at this bin is above a threshold, compute its frequency and add to the vector
    if (channelData[i] > threshold)
    {
      float freq = (float)i * getSampleRate() / size;  // Convert bin index to frequency
      angularFrequency = advancedAtmosCalc->calcAngularFrequency(freq);
      timeThruWave = advancedAtmosCalc->calcTimePeriodPos(freq, distance);
      
      //if angular frequency is not NAN, and timeThruWave is not NAN, calculate the air impedance
      if(!std::isnan(angularFrequency) && !std::isnan(timeThruWave)){
        advancedAtmosCalc->calcAirImpedance(angularFrequency, timeThruWave, distance);
      }
    }
  }
}
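For reference, the FIFO pattern the tutorials use can be sketched without any JUCE dependency. This is a minimal illustration, not the original code: a fixed-size buffer accumulates incoming samples on the audio thread and flags when a complete FFT-sized block is ready, with no per-block allocation. The names (`SampleFifo`, `fftOrder`) are assumptions for the sketch.

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cstddef>

// Hypothetical sketch of the tutorial-style FIFO: gather samples into
// complete FFT-sized blocks with fixed storage (no heap use in the callback).
constexpr int fftOrder = 10;            // 2^10 = 1024-point FFT
constexpr int fftSize  = 1 << fftOrder;

struct SampleFifo
{
    std::array<float, fftSize> fifo {};         // fixed storage, allocated once
    std::array<float, fftSize * 2> fftData {};  // room for the FFT's in-place output
    int index = 0;
    bool blockReady = false;

    // Called per sample from the audio callback; O(1) and allocation-free.
    void pushSample (float sample)
    {
        if (index == fftSize)                   // a full block has accumulated
        {
            if (! blockReady)
            {
                std::copy (fifo.begin(), fifo.end(), fftData.begin());
                blockReady = true;              // consumer can now run the FFT on fftData
            }
            index = 0;
        }

        fifo[(size_t) index++] = sample;
    }
};
```

The point of the pattern is that the expensive FFT work (and any analysis such as the air-impedance calculation) can then run on `fftData` outside the per-sample hot path, or on another thread entirely.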

Have you checked the buffer sizes you get from FL? If I remember correctly, FL is the one that sends random buffer sizes to plugins.

No, I haven't, but this is one processor of a multi-processor plugin, and there is nothing in this code that requires a fixed-size buffer, as the vectors naturally resize to whatever the buffer happens to be. Many thanks for your input :slight_smile:


Very basic audio processing rule:

Never use memory allocation in the processBlock scope.

Okay, thank you. Apart from that, is this approach viable in your opinion?

I was referring to your code - do you understand where you are allocating?

If you write code like this, I am afraid you are not yet ready to work with FIFOs and doing deferred FFTs with them…

Yes, I understand where I am allocating; I was referring to whether you think my overall architecture is viable. I have been working with audio programming for all of about two months, and this is my first project, so I am learning as I go.

Let me phrase it differently:

I advise not continuing with this approach until you have removed all code that does memory allocation. Until then, this approach (architecture?) is not viable.

You declare std::vectors, which allocate internally. If you are not aware of that, do not use them.

In the extractFrequencies method you use new, which clearly also allocates.
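For what it's worth, that `new` can usually be replaced with a fixed-size member that lives with the object. A small sketch of the idea, assuming a compile-time FFT size (the names `FftScratch`, `fftSize`, and `load` are made up for illustration):

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// Sketch: replace per-block `new float[size * 2]` with a member array
// sized for the FFT, so the audio thread never touches the heap.
constexpr int fftSize = 1024; // assumed compile-time FFT size

struct FftScratch
{
    std::array<float, fftSize * 2> fftData {}; // allocated once, with the object

    // Copy `count` samples (count <= fftSize) in and zero-pad the rest,
    // matching the layout an in-place frequency-only transform expects.
    void load (const float* src, int count)
    {
        std::copy (src, src + count, fftData.begin());
        std::fill (fftData.begin() + count, fftData.end(), 0.0f);
    }
};
```

After `load`, the scratch buffer can be handed straight to the FFT call each block with no allocation at all.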

Thank you, yes, I understand that those are allocations. I did not know that allocating on the audio thread is bad; I shall remove those. Many thanks for your time.