FFT Amplitude

	FloatVectorOperations::multiply (fftSamples, hannWindow.get(), numSamples);
	fft->performFrequencyOnlyForwardTransform (fftSamples);

	// Correct for power loss due to Hann(ing) window with 2.0 = 6.02 dB
	// http://www.wavemetrics.net/doc/igorman/III-09%20Signal%20Processing.pdf - page III-247

	FloatVectorOperations::multiply (fftSamples, 2.0f, numSamples);

Are there any papers on a good implementation of the above type of scope? Just doing an FFT is not great because you end up with poor low-end frequency resolution and way too many points for the high end due to it being linearly spaced.

Nammick’s screenshot is a good example. Most likely we want to build a logarithmic graph, but half the data points will be for above 10 kHz. You’d definitely be better off drawing every second point, or averaging the points above a certain frequency, to reduce the amount of drawing.

I’ve heard of running two FFTs at different rates to compromise but I’ve never seen a guide/tutorial for that.
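To show what I mean by averaging: something like this, where each log-spaced display column averages all the linear FFT bins that fall into it (a rough sketch only; the function name, column count and frequency range are made up):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Reduce linearly spaced magnitude bins to log-spaced display columns.
// `magnitudes` holds the positive-frequency bins (fftSize / 2 of them),
// so the bin spacing is sampleRate / (2 * magnitudes.size()).
inline std::vector<float> binsToLogColumns (const std::vector<float>& magnitudes,
                                            double sampleRate, int numColumns,
                                            double minFreq, double maxFreq)
{
    const double binHz = sampleRate / (2.0 * (double) magnitudes.size());
    std::vector<float> columns ((size_t) numColumns, 0.0f);
    std::vector<int>   counts  ((size_t) numColumns, 0);

    for (size_t bin = 1; bin < magnitudes.size(); ++bin)
    {
        const double freq = (double) bin * binHz;
        if (freq < minFreq || freq > maxFreq)
            continue;

        // position of this bin on a logarithmic frequency axis, 0..1
        const double t = std::log (freq / minFreq) / std::log (maxFreq / minFreq);
        const int col = std::min (numColumns - 1, (int) (t * numColumns));
        columns[(size_t) col] += magnitudes[bin];
        ++counts[(size_t) col];
    }

    // average the bins that landed in each column
    for (int c = 0; c < numColumns; ++c)
        if (counts[(size_t) c] > 0)
            columns[(size_t) c] /= (float) counts[(size_t) c];

    return columns;
}
```

The crowded high end (many bins per pixel) collapses to one averaged value per column, while the sparse low end keeps one bin per column at most.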

Not knowing much about FFTs, this has been interesting. This makes me wonder about a related question: Is there a way to use FFT (or anything else) to measure the amplitude of a specific frequency?

The reason I ask is that I have a need to measure the amplitude of several (5 or 6) specific frequencies and because of the bin spacing, using FFT as is makes it a little awkward. Any suggestions would be appreciated.

Due to the uncertainty principle you will always have a trade-off between time and frequency resolution. The more audio samples you use (worse time resolution), the better your frequency resolution, and the better you can discriminate two nearby frequencies. You will always get spill from adjacent frequency components; that’s why windowing (https://en.wikipedia.org/wiki/Window_function) can help reduce some side-lobes in the analysis.

If you know the exact values of the frequencies you want to measure, you can also use a DFT directly: multiply a complex exponential at exactly that frequency with the signal and sum up the values (that’s basically one DFT bin). However, you will get results closer to the truth the more samples (time) you use.
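A rough sketch of that single-bin correlation in plain C++ (the function name is made up, and a production version would typically use the Goertzel algorithm instead, which computes the same bin more cheaply):

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Estimate the amplitude of one known frequency by correlating the signal
// with a complex exponential at that frequency (a single DFT bin).
// freq and sampleRate are in Hz; more samples give a better estimate.
inline double measureAmplitude (const std::vector<float>& signal,
                                double freq, double sampleRate)
{
    const double pi = std::acos (-1.0);
    std::complex<double> acc { 0.0, 0.0 };

    for (size_t n = 0; n < signal.size(); ++n)
    {
        const double phase = 2.0 * pi * freq * (double) n / sampleRate;
        acc += (double) signal[n] * std::exp (std::complex<double> (0.0, -phase));
    }

    // a sine of amplitude A at this frequency gives |acc| ~= A * N / 2
    return 2.0 * std::abs (acc) / (double) signal.size();
}
```

Note that the estimate is exact when the analysis length covers a whole number of cycles; otherwise leakage from the truncation creeps in, which is where windowing comes back into play.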

Why not use band-pass filter(s) in the time domain?

You can run a parallel down-sampling, say focusing on stuff below 600-1000 Hz, and use the FFT results from that for the lower frequencies.
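The payoff is easy to see from the bin spacing, which is just sampleRate / fftSize: the same FFT size on audio decimated by 16x gives 16x finer low-end resolution (a trivial sketch, numbers picked arbitrarily):

```cpp
#include <cassert>
#include <cmath>

// FFT bin spacing in Hz. Running a second, identical-size FFT on
// down-sampled audio shrinks the spacing by the decimation factor,
// which is exactly what the low end of a log display needs.
inline double binSpacingHz (double sampleRate, int fftSize)
{
    return sampleRate / (double) fftSize;
}
```

For example, a 2048-point FFT at 48 kHz has ~23.4 Hz bins (hopeless below 100 Hz), while the same FFT on 16x-decimated audio has ~1.46 Hz bins and covers everything below 1.5 kHz.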



@peter-samplicity When you apply windowing do you only apply it to the first half of the fftSamples?

You should always apply the whole window, otherwise you will get some unwanted artifacts within the analysis. When windowing, don’t forget to overlap the blocks, otherwise you would miss half* of the audio.

*depends on the window you use

@danielrudrich - In Pete’s example, the FFT size and numSamples looked like two different values. I’m nearly there with this. I think I need to do some value averaging, as the frame rate is high and the display updates very aggressively…

See video


Playing devil’s advocate: I know that is true in theory, but since you are measuring a theoretically continuous function, would it matter for an analyser?

I just used the JUCE windowing class, so I hope it does the right thing. But am I right in believing that the window is applied to the input samples, when there is no imaginary part (yet)?

That’s what I’m trying to figure out. With FIR filters I usually apply the window to the taps, but since the taps are multiplied by the samples anyway it doesn’t matter; with FFT there is ambiguity.

From reading @peter-samplicity’s PDF resource, it says you apply the window directly to the samples before the FFT, not after it, and definitely not to the full complex FFT data (real & imaginary).

Yes, it definitely would. Imagine a hi-hat hit (a nice one) coincidentally occurring at the beginning of your block of audio samples. By applying a Hann window (it’s Hann, not Hanning, btw :wink: ) you would suppress the hi-hat signal, as a Hann window starts from zero. So you would lose the hi-hat in your spectrum. You can circumvent that by overlapping your input blocks, e.g. by 50% (perfect for Hann!). Then in one block the hi-hat is gone; in the other (here the previous one, actually) it’s there, as the window is at its maximum in the middle.

Well, Juce Windowing class is doing whatever you tell it to do :slight_smile:

In general: if you want to process a 1024-point FFT and use windowing, you will do the following:

  • get 1024 samples of audio data (which is real)
  • apply (multiply) a window with length 1024 to it (which is also real, output will be real)
  • perform the FFT -> you will get complex data for each bin
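As a sketch of the first two steps in plain C++ (not the JUCE class, function names made up), this also shows where the 2.0f correction from the top of the thread comes from: the Hann window’s mean value, its coherent gain, is exactly 0.5, i.e. a 6.02 dB loss:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Build a periodic Hann window of the given length. Multiplying the
// time samples by this before the FFT tapers the block edges to zero.
inline std::vector<float> makeHannWindow (int length)
{
    const double pi = std::acos (-1.0);
    std::vector<float> w ((size_t) length);

    for (int n = 0; n < length; ++n)
        w[(size_t) n] = (float) (0.5 * (1.0 - std::cos (2.0 * pi * n / length)));

    return w;
}

// Coherent gain = mean of the window. For Hann this is 0.5, which is
// why the FFT magnitudes get multiplied by 2.0f (~ +6.02 dB) afterwards.
inline double coherentGain (const std::vector<float>& w)
{
    double sum = 0.0;
    for (float v : w)
        sum += v;
    return sum / (double) w.size();
}
```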

If you perform an FFT on real data, the complex values for positive and negative frequencies will be complex conjugates of each other (switching the sign of the imaginary part). So that’s redundant information, isn’t it? That’s why performRealOnlyForwardTransform(...) won’t give you that redundant information at all. If you process your complex (roughly half of the) FFT data and want to go back to the time domain with the inverse FFT, the performRealOnlyInverseTransform(...) method will go like: “ah, he/she’s giving me only positive-frequency data, that’s fine, I am going to conjugate it, flip it, and get my negative frequencies out of it, I am so clever”, and then a regular IFFT is performed.

That’s very convenient, as we only have to process half data, and won’t lie sleeplessly in bed at night and think “negative frequencies?!”.
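You can see that conjugate symmetry with a naive DFT (for illustration only, obviously not how you would compute it in practice):

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Naive O(N^2) DFT, just to demonstrate the symmetry: for real input x,
// X[N - k] is the complex conjugate of X[k], so the negative-frequency
// half of the spectrum carries no extra information.
inline std::vector<std::complex<double>> naiveDFT (const std::vector<double>& x)
{
    const double pi = std::acos (-1.0);
    const size_t N = x.size();
    std::vector<std::complex<double>> X (N);

    for (size_t k = 0; k < N; ++k)
        for (size_t n = 0; n < N; ++n)
            X[k] += x[n] * std::exp (std::complex<double> (0.0,
                         -2.0 * pi * (double) (k * n) / (double) N));

    return X;
}
```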

So: apply the window before the FFT! Overlap when using windows (also true on Mac), otherwise you will lose content you might want to analyze. Overlapping is quite a pain to implement; I’ve done it here: https://github.com/DanielRudrich/FftWithHopSizeAndHannWindow
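The hop-size idea boils down to something like this (a minimal sketch, not the linked implementation; the struct name is made up, and a real version would use a circular buffer instead of shifting every sample):

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Minimal 50%-overlap framer: push samples one at a time; every hopSize
// samples a full frame of fftSize samples is emitted, so consecutive
// frames share half their content.
struct OverlapFramer
{
    explicit OverlapFramer (int fftSizeIn)
        : fftSize (fftSizeIn), hopSize (fftSizeIn / 2),
          buffer ((size_t) fftSizeIn, 0.0f) {}

    // returns true when `frame` has been filled with fftSize samples,
    // ready for windowing and an FFT
    bool push (float sample, std::vector<float>& frame)
    {
        // keep the most recent fftSize samples (naive left-shift)
        std::memmove (buffer.data(), buffer.data() + 1,
                      (size_t) (fftSize - 1) * sizeof (float));
        buffer[(size_t) fftSize - 1] = sample;

        if (++count % hopSize != 0)
            return false;

        frame = buffer; // copy out the latest fftSize samples
        return true;
    }

    int fftSize, hopSize;
    long count = 0;
    std::vector<float> buffer;
};
```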

You can of course apply another window in the frequency domain, e.g. for EQ. However, that’s then equal to a convolution in the time domain, which will append samples to your block (with FFT -> IFFT the block length will remain the same), so if you want to do it mathematically correctly, you will have to do something a little different.

Option 1:
  get real block(a) -> apply window -> FFT(a)
  get real block(b) -> apply window -> FFT(b)   // block(b) offset by half the block size
  output bins = (FFT(a) + FFT(b)) / 2

Option 2:
  get real block(a) -> apply window
  get real block(b) -> apply window             // block(b) offset by half the block size
  shift block(b) back in line with block(a) and average
  output bins = FFT(block(avg))

Sorry for the pseudocode; it helps me keep a mental map.

It IS Hann, often misspelled as Hanning because there is also a Hamming function!


Just that. You will get twice as many FFT frames, which you can visualize / process one after another.

Edit: You can of course add the outputs FFT(a) + FFT(b), however what would you do then with the next block, FFT(c)?

I think I have cracked it. My FFT display seems to be behaving as it should now and updating less aggressively. I have simplified the overlapping window approach using a custom delay line

void AudioGraph::addFFTSample (float sample)
{
    delay.push (sample); // store in the custom delay line (method name assumed here)

    // old method passed every sample to the FFT window; now a block is only
    // prepared when fftTick reaches half the FFT size (50% overlap)
    if (fftTick >= fftSize / 2)
    {
        if (! nextFFTBlockReady)
        {
            zeromem (fftData, sizeof (fftData));

            const int step = delay.getStep();
            if (step == 0)
            {
                memcpy (fftData, delay.getSamplesPointer(), fftSize * sizeof (float));
            }
            else // unwrap the circular buffer: oldest part first, then the newest
            {
                memcpy (fftData, &delay.getSamplesPointer()[step], (fftSize - step) * sizeof (float));
                memcpy (&fftData[fftSize - step], delay.getSamplesPointer(), step * sizeof (float));
            }
            nextFFTBlockReady = true;
        }
        fftTick = 1; // set to one because the first sample of this round is already processed
    }
    else
    {
        ++fftTick;
    }
}

then my code for generating the FFT render

void AudioGraph::paintSpectrogram (Graphics& g, const double sampleRate)
{
    FloatVectorOperations::multiply (fftData, windowData, fftSize);
    forwardFFT->performFrequencyOnlyForwardTransform (fftData);

    // Correct for power loss due to Hann window with 2.0 = 6.02 dB
    // http://www.wavemetrics.net/doc/igorman/III-09%20Signal%20Processing.pdf - page III-247
    FloatVectorOperations::multiply (fftData, 2.0f, fftBlockSize);

    Path p;
    p.startNewSubPath (0, getHeight());

    for (int x = 1; x < fftSize / 2; ++x) // only bins up to Nyquist carry unique information
    {
        double freq = x * sampleRate / (double) fftSize; // bin centre frequency
        double linear = fftData[x];
        double db = Nano::DSP::Math::linToDb (linear);
        db += Nano::DSP::Math::tiltVolume (1200.0, -3.0, freq); // brings high end into useful range
        double thisX = frequencyToX (freq);
        double thisY = logAmplitudeToY (db);
        p.lineTo (thisX, thisY);
    }
    p.lineTo (getWidth(), getHeight());
    p.lineTo (0, getHeight());

    g.setColour (Colour::fromRGBA (0, 255, 0, 100));
    g.fillPath (p);
    g.setColour (Colour::fromRGBA (70, 240, 70, 200));
    g.strokePath (p, PathStrokeType (1));
}

Thanks again for all the help guys…

May I give you one more tip?

Refactor the creation/updating of the path out of the paint() method. Make the path a member variable and update it in some method updatePath(), which at its end calls repaint() to invalidate the area. Otherwise other components or the editor may trigger your paint(), which recalculates the FFT. I don’t think this is actually bad, but I personally find it a better approach to split things up: update the path after the FFT, and let paint() simply handle the painting.


Cheers, that was actually my next step. I have added a couple of custom methods to the component which let me dictate when it gets redrawn.

Notice that when I move the handles around, the FFT display goes off from time to time. That’s because the redraw method gets called while nextFFTBlockReady is false. My thought was to just re-render whatever path is current when necessary, thinking the path will always persist on the UI thread.


One more (minor) performance tip: when building a path with lots of points, use Path::preallocateSpace() to avoid the path object having to resize itself as it grows.


You beat me! :slight_smile: