FFT: spectral transformation basic example

Hi, I'm having some difficulty using the FFT plugin to manipulate audio in the spectral domain.

I need to get the frequencies and phases from a time-domain signal, process them, and convert the signal back to the time domain to play it.

Could someone provide a simple example?

Thank you,


You need windowing + overlap-add. (I haven't looked at the FFT plug-in, but I'm guessing you would still have to implement this yourself.)

Have a look at Stefan's code here:


It's hard work getting your head around it if you are not familiar with the math. I can help you out on the IRC channel if you like.



Thank you for your reply,

Indeed I will have to implement that myself, and the math is not an issue. But I'm having trouble with the FFT plugin itself.

I would need an example of a signal transformed and reversed back, using the FFT plugin.

Thank you again,


Looking at https://www.juce.com/doc/classFFT ...

First create an order 10 forward FFT

That's going to be 2^10 = 1024 bins, or 2048 input samples

Now create float x[2048], give it a waveform -- maybe a sawtooth with only 1 tooth. So, a ramp from -1 to 1.

Throw it into performRealOnlyForwardTransform

Convert to complex: auto z = (Complex*)x;

Now you have your frequency bins.

If you want to make a crude lowpass filter for example, set z[512 onwards] to 0

Now convert back to the time domain with performRealOnlyInverseTransform

Try doing a basic loopback and check you get back out what you put in.

Once you have that working, you are ready to work on a big waveform: you will be stepping a size-2048 window through this big waveform in steps of 512. So 4x overlap.

For each frame, you want to envelope/window it (you could Google hanning window)

 then fft, filter, ifft

 and then add the result to your output waveform at that location.

 If you get stuck, post code!



Sure, this is my code. It just does an FFT and then reverses it. But the result is just random floats, not even normalized.

If I remove the 2 transforms the sound is right.

Really, just a couple of lines as an example of a transform and a reverse transform would be nice, even without the overlapping.

void MainContentComponent::getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
{
    mInstrumentPlayer->getNextAudioBlock (bufferToFill); // fill the buffer with audio

    int numSamples = bufferToFill.buffer->getNumSamples(); // 512 here

    for (int channel = 0; channel < 2; ++channel)
    {
        float* channelData = bufferToFill.buffer->getWritePointer (channel);

        FFT lFFT (9, false);        // forward transform
        FFT lInverseFFT (9, true);  // inverse transform

        float* freqArray = new float[numSamples * 2];
        zeromem (freqArray, numSamples * 2 * sizeof (float));
        memcpy (freqArray, channelData, numSamples * sizeof (float)); // fills the first half of the array with the signal

        lFFT.performRealOnlyForwardTransform (freqArray);
        lInverseFFT.performRealOnlyInverseTransform (freqArray);

        bufferToFill.buffer->copyFrom (channel, 0, freqArray, numSamples);
        delete[] freqArray;
    }
}

Thank you again,


512 samples ~ 256 bins. So FFT lFFT(8, ...);

The documentation should be tidied up here:


"The the number of points the FFT will operate on will be 2 ^ order."

^ it should say "number of complex bins"

 it should also say that 2 real datapoints or 1 complex datapoint generate 1 complex bin.

 I think!

You should first be testing on a known waveform. Only put it into the render callback once you know you have it working!

 Also jassert( bufferToFill.buffer->getNumSamples() == 512 ); would be advisable. I'm not sure this can always be relied upon.



Still not working with FFT lFFT(8, ...), or with a known waveform, like a sine.

JUCE seems nice so far. Why is this simple processing so complicated? I will read the source code and figure things out.


There is a demo at https://github.com/julianstorer/JUCE/tree/master/examples/SimpleFFTExample/Source

Does that help?


PS If you can construct a minimal example demonstrating failure, I recommend posting it! Then the team can sort it out if it is a bug. Looking through the FFT source code is going to be really hard work!

Unfortunately this example only does a forward transform.

I will post the solution here if I find it.

Thank you,


Hello, BaptisteB

I have this project coming up where we have to create a phase vocoder.
I was wondering why you are using getNextAudioBlock() instead of processBlock().
I think processBlock() applies more to a VST plugin, which is what I'm supposed to make.
Do you think I could/ should use what you’re using?

Thank you,

@HectorPerez I didn't know about this AudioProcessor class. It looks appropriate in your case.
However, I found out how to do an FFT/IFFT round trip.

See this topic: Issue with FFT plugin - Inverse Transformation