FFT: communication between Processor and Editor

Hi everyone, hope all is good! :slight_smile:
I’m creating a multiband compressor plugin. I’ve finished the DSP and almost finished the GUI; the last thing I want to build is the spectrum analyzer.
I’m following this JUCE tutorial, but I’m not sure how to “send” the samples from processBlock in the AudioProcessor to the Editor (more precisely, to the SpectrumAnalyzer class attached to a component in the GUI).

I thought about using a FIFO array where I push my samples one at a time, and when my component needs to be repainted I take a snapshot of that array (through a public pointer so I can access it from the editor) and use it as input for the FFT.
But by doing this I’m adding work to processBlock, because for every sample in the buffer I have to shift every element of the array by one and add the new sample at the end…

There’s surely a better implementation; do you know how I should do it? The fact that the processor and the components run at different rates is what’s confusing me about all this.

Sorry, it’s my first time working with FFTs, so maybe this is a simple question for you guys :frowning:
Thank you for your attention!!!

Spoiler alert: there is no easy, out-of-the-box way of building FFT analyzers.

The scope of the subject is too wide to fit in a forum post. Have a look at this:

https://www.programmingformusicians.com/simplembcomp/

1 Like

This is why in audio we use circular buffers to implement the FIFOs. :wink: With a circular buffer, all that shifting isn’t needed: processBlock just writes into the buffer, and when it reaches the end, the write index wraps back to the beginning.

You still need to come up with some scheme that lets the UI read from this circular buffer so that it’s never reading anything that processBlock is writing at the same time.
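If it helps, here’s a minimal sketch of that idea using juce::AbstractFifo, which does the index bookkeeping (including the wrap-around) for you. The class and member names are just placeholders, not anything from the tutorial:

#include <algorithm>
#include <array>
// (assumes the JUCE headers are included, as usual in a plugin project)

// Minimal sketch: single writer (audio thread) / single reader (GUI thread).
class AnalyserFifo
{
public:
    // Called from processBlock with the samples of one channel.
    void push (const float* samples, int numSamples)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (numSamples, start1, size1, start2, size2);

        if (size1 > 0) std::copy_n (samples,         size1, buffer.data() + start1);
        if (size2 > 0) std::copy_n (samples + size1, size2, buffer.data() + start2);

        fifo.finishedWrite (size1 + size2);
    }

    // Called from the GUI (e.g. a Timer callback); returns how many samples were read.
    int pop (float* dest, int maxSamples)
    {
        int start1, size1, start2, size2;
        fifo.prepareToRead (maxSamples, start1, size1, start2, size2);

        if (size1 > 0) std::copy_n (buffer.data() + start1, size1, dest);
        if (size2 > 0) std::copy_n (buffer.data() + start2, size2, dest + size1);

        fifo.finishedRead (size1 + size2);
        return size1 + size2;
    }

private:
    static constexpr int capacity = 1 << 14;   // ~16k samples of headroom
    juce::AbstractFifo fifo { capacity };
    std::array<float, capacity> buffer {};
};

The audio thread calls push() from processBlock and the GUI calls pop() from a timer callback; as long as there is exactly one writer and one reader, AbstractFifo keeps the read and write regions from overlapping without any locks.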

1 Like

I saw his videos on YouTube and he’s really good! Thank you, I’ll take a look!

Oh ok, yeah, I don’t know why I didn’t think of that… Thank you!!!
Yep, I’ll set up something like a mutex for that buffer :slight_smile:
Thank you both for your time!!!
Have a great day!

That is the wrong solution, as you don’t want to block the audio thread using a mutex. I think alternative solutions have been discussed previously on these forums.

1 Like

I use a circular buffer; you can only read blocks that have already been written. This only uses memcpy, so it’s very fast. Only the cursor is shared.

This is the class I use. I asked ChatGPT to comment it, and it seems to have got it right.

/**
This class represents a circular buffer for audio data.

The CircularBuffer allows storing and retrieving audio samples in a circular
fashion. It provides functions for preparing the buffer, pushing new
blocks of samples, and retrieving previously written samples.

The size of the buffer is determined by the duration and sample rate
specified in the `prepareToPlay` function. The `pushNextBlock` function
is used to add new audio samples to the buffer, and the `getData` function
retrieves previously written samples.

The buffer can handle multi-channel audio data.

Usage:
- Call `prepareToPlay` before using the buffer to set the size based on
  the desired duration and sample rate.
- Use `pushNextBlock` to add new audio samples to the buffer.
- Use `getData` to retrieve previously written samples.

Note: Make sure to call `prepareToPlay` before using the buffer.
*/
class CircularBuffer 
{
    std::unique_ptr<juce::AudioBuffer<float>> data;
    int size = 0;
    std::atomic<int> cursor { 0 };
    double sampleRate = 0.0;

public:

/**
    Prepare the buffer for playback.

    @param sampleRate    The sample rate of the audio.
    @param duration      The duration (in seconds) of the buffer.
    @param numChannels   The number of audio channels.
*/
    void prepareToPlay(double _sampleRate, double duration, int numChannels)
    {
        sampleRate = _sampleRate;
        size = static_cast<int>(sampleRate * juce::jmax(1.0, duration));
        data.reset(new juce::AudioBuffer<float>(numChannels, size));
        cursor = 0;
    }

/**
    Push a new block of audio samples into the buffer.

    @param buffer   The audio buffer containing the samples to push.
*/
    void pushNextBlock(const juce::AudioBuffer<float>& buffer)
    {
        jassert(size > 0); // Size must be greater than 0. Make sure to call prepareToPlay() before updating the buffer.
        jassert(buffer.getNumChannels() == data->getNumChannels()); // Number of channels must match between buffers.

        int numSamples = buffer.getNumSamples();
        int numChannels = buffer.getNumChannels();

        for (int channel = 0; channel < numChannels; ++channel)
        {
            const float* channelData = buffer.getReadPointer(channel);
            float* bufferData = data->getWritePointer(channel);

            int end = cursor + numSamples;

            if (end < size) {
                // Copy the entire block in one go
                std::memcpy(bufferData + cursor, channelData, numSamples * sizeof(float));
            }
            else {
                // Wrap around to the beginning and copy in two blocks
                int firstBlockSize = size - cursor;
                int secondBlockSize = end - size;

                std::memcpy(bufferData + cursor, channelData, firstBlockSize * sizeof(float));
                std::memcpy(bufferData, channelData + firstBlockSize, secondBlockSize * sizeof(float));
            }
        }

        cursor = (cursor + numSamples) % size;
    }

/**
    Retrieve previously written audio samples from the buffer.

    @param channel      The channel index to retrieve samples from.
    @param destBuffer   The destination buffer to copy the samples to.
    @param numSamples   The number of samples to retrieve.
*/
    void getData(int channel, float* destBuffer, int numSamples)
    {
        jassert(destBuffer != nullptr); // destBuffer must be non-null

        if (numSamples > size) numSamples = size;

        int firstSample = cursor - numSamples;

        if (firstSample >= 0) {
            const float* source = data->getReadPointer(channel) + firstSample;
            std::memcpy(destBuffer, source, numSamples * sizeof(float));
        }
        else {
            firstSample += size;
            int subBlockSize = size - firstSample;

            const float* source1 = data->getReadPointer(channel) + firstSample;
            std::memcpy(destBuffer, source1, subBlockSize * sizeof(float));

            const float* source2 = data->getReadPointer(channel);
            std::memcpy(destBuffer + subBlockSize, source2, (numSamples - subBlockSize) * sizeof(float));
        }
    }

};

To use it, in prepareToPlay:

circularBuffer.prepareToPlay(sampleRate, 4, getMainBusNumOutputChannels());

In processBlock:

  circularBuffer.pushNextBlock(buffer);

In the editor:

 audioProc->circularBuffer.getData(0, sampleData, sampleSize);

This can be used freely, anytime, anywhere, for any representation of audio data, such as an oscilloscope or a meter. getData can retrieve any number of the most recently available samples.
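For the spectrum analyzer itself, here is a rough sketch of what the editor side could look like, polling the buffer from a juce::Timer and running a juce::dsp::FFT on the result. This assumes the juce_dsp module; SpectrumAnalyzer, fftOrder and YourAudioProcessor are made-up names, and I’m holding the processor by reference rather than through the audioProc pointer above:

// Rough sketch of the editor-side component (assumes the juce_dsp module and the
// usual JUCE headers; YourAudioProcessor stands in for your processor class).
class SpectrumAnalyzer : public juce::Component,
                         private juce::Timer
{
public:
    explicit SpectrumAnalyzer (YourAudioProcessor& p) : audioProc (p)
    {
        startTimerHz (30);   // no point repainting faster than the screen refreshes
    }

private:
    void timerCallback() override
    {
        // Pull the most recent fftSize samples into the first half of fftData;
        // the FFT uses the rest of the array as workspace.
        std::fill (fftData.begin(), fftData.end(), 0.0f);
        audioProc.circularBuffer.getData (0, fftData.data(), fftSize);

        window.multiplyWithWindowingTable (fftData.data(), fftSize);
        fft.performFrequencyOnlyForwardTransform (fftData.data());

        repaint();           // paint() can now draw fftData[0 .. fftSize / 2]
    }

    static constexpr int fftOrder = 11;            // 2048-point FFT
    static constexpr int fftSize  = 1 << fftOrder;

    YourAudioProcessor& audioProc;
    juce::dsp::FFT fft { fftOrder };
    juce::dsp::WindowingFunction<float> window { fftSize, juce::dsp::WindowingFunction<float>::hann };
    std::array<float, fftSize * 2> fftData {};     // the FFT needs 2 * fftSize floats
};

performFrequencyOnlyForwardTransform wants an array of 2 * fftSize floats, which is why fftData is twice the FFT size.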

2 Likes

Yeah, for sure “mutex” is not the right word for what I meant to do lol. My solution wouldn’t block the audio thread, but thanks for the advice!!! :wink:

Wow! Thank you so much! I’ll save this <3

As kerfuffle said, you have to make sure you don’t read parts of the buffer that are currently being written. I did not add any safeguards for that, since the buffer is several seconds long and I will never read more than a few thousand samples. I suppose something like limiting the number of samples read to half the buffer would be enough: if (numSamples > size / 2) numSamples = size / 2; and maybe also limiting each write to half the buffer size.
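In code, that margin might look something like this (just a sketch of the idea above, applied to the class I posted):

    // In getData(): never hand out more than half the buffer, so the region being
    // read stays clear of the region the audio thread is about to write.
    if (numSamples > size / 2)
        numSamples = size / 2;

    // And in pushNextBlock(): a single block should never cover more than half the
    // buffer either (trivially true with a multi-second buffer and normal block sizes).
    jassert (buffer.getNumSamples() <= size / 2);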

1 Like

If you are performing an FFT on audio data for a visualization, you can take advantage of the fact that the screen only updates 30-60 times per second. So don’t push all samples to the GUI: push enough to do one FFT, then wait 1/30th of a second before pushing another frame. This will save a ton of memory bandwidth.
For bonus points: don’t push any data onto the FIFO until you have detected that the GUI has read the previous frame of data. This ‘throttles’ the system when the UI is heavily loaded and can’t keep up, which saves CPU and memory bandwidth, especially when the system is already struggling under high load.
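A sketch of that handshake, assuming a single shared FFT-sized frame and a std::atomic<bool> flag (all the names here are made up; needs <array>, <atomic> and <algorithm>):

    // These would live in the processor.
    static constexpr int fftSize = 2048;
    std::array<float, fftSize> fftFrame {};
    std::atomic<bool> frameReady { false };
    int framePos = 0;                        // only touched by the audio thread

    // Audio thread: only fill the frame while the GUI isn't holding one.
    void pushSamplesForAnalysis (const float* samples, int numSamples)
    {
        if (frameReady.load (std::memory_order_acquire))
            return;                          // GUI hasn't consumed the last frame yet -> throttle

        for (int i = 0; i < numSamples; ++i)
        {
            fftFrame[(size_t) framePos++] = samples[i];

            if (framePos == fftSize)
            {
                framePos = 0;
                frameReady.store (true, std::memory_order_release);
                return;                      // one frame per GUI read is plenty for a display
            }
        }
    }

    // GUI thread (e.g. a 30 Hz timer): returns true if a new frame was copied.
    bool pullFrameForAnalysis (float* dest)
    {
        if (! frameReady.load (std::memory_order_acquire))
            return false;

        std::copy_n (fftFrame.data(), fftSize, dest);
        frameReady.store (false, std::memory_order_release);
        return true;
    }

Because the audio thread only writes the frame while the flag is false and the GUI only reads it while the flag is true, the two threads never touch the frame at the same time, and a slow GUI simply causes frames to be skipped.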

2 Likes

@Marcusonic @JeffMcClintock Sorry for the late reply. Thank you so much for your advice!!! I’ll adjust everything!
(I already did the “bonus” optimization, ty!)