FFT in Audio Plugin



First I’d like to say hello to everyone. I’m new to JUCE, so maybe this question is senseless… I’m working on a spectrum analyzer and I’ve run into one problem: if I have two input buffers, how should I prepare the samples for the FFT? Should I create two separate lines on my spectrum analyzer, one per channel, or is there a way to add the channels together and run the FFT on the accumulated samples?


That is up to you; it depends on what you want to visualise and what you think is useful for the user.

The Waves one (PAZ), for example, lets you switch between the average (left + right), the side signal (left - right), or a single selected channel.
Maybe you want to show both spectra, though that’s twice the processing.


Thanks for the reply!
I’m interested in showing some kind of “averaged” signal from every active output channel as a single line on my spectrogram. But I don’t know what the best way is to prepare one FIFO for the FFT in this case. I don’t want to lose too many samples from each output buffer, and of course I don’t want to add much latency…

At the moment I can see two solutions (I’m afraid both of them are wrong):

  1. In a for loop, take the first sample from the first output buffer, the second sample from the second output buffer, and so on, filling my FIFO with these samples… but as a result I’ll lose a lot of information.
  2. Take the first sample from every buffer and average them to one value, then put it into my FIFO… and the same for the second sample, and so on. But I’m afraid this will give me weird results…
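Option 2 is a reasonable shape for the code. A minimal sketch, assuming two channels and using plain Python lists as stand-ins for the audio buffers (the names `FFT_SIZE`, `fifo`, and `push_block` are mine):

```python
from collections import deque

FFT_SIZE = 8  # tiny for illustration; a real analyzer would use e.g. 1024

# FIFO of mono samples; once it holds FFT_SIZE samples, an FFT pass can run.
fifo = deque(maxlen=FFT_SIZE)

def push_block(left_block, right_block):
    """Average the two channels sample-by-sample and queue the result."""
    for l, r in zip(left_block, right_block):
        fifo.append(0.5 * (l + r))
    return len(fifo) >= FFT_SIZE  # True once enough data has accumulated

ready = push_block([1.0, 0.0, -1.0, 0.0], [0.0, 1.0, 0.0, -1.0])
print(ready)          # False: only 4 of 8 samples so far
ready = push_block([1.0, 0.0, -1.0, 0.0], [0.0, 1.0, 0.0, -1.0])
print(ready)          # True
print(list(fifo)[0])  # 0.5, i.e. (1.0 + 0.0) / 2
```

No samples are dropped: every incoming sample contributes to exactly one mono value, so the only information lost is whatever cancels between the channels.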


Your problem will be that one buffer is not enough data. That becomes obvious with some simple arithmetic:

Even a big buffer of 1024 samples at 48000 Hz is just 21 ms long. The lowest frequency you can resolve with 1024 samples at 48 kHz is 48000 / 1024 ≈ 47 Hz, which is not that bad. But people often want to work at 256 samples instead, to reduce latency, in which case the lowest resolvable frequency is 187.5 Hz. So you see, to be independent of the user’s settings you will have to accumulate samples over a few buffers.
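Those numbers all follow from the relation bin width = sample rate / FFT size; a quick check (variable names are mine):

```python
sample_rate = 48000  # Hz

for fft_size in (1024, 256):
    bin_width   = sample_rate / fft_size           # lowest resolvable frequency, Hz
    duration_ms = 1000.0 * fft_size / sample_rate  # window length, ms
    print(f"{fft_size:5d} samples -> {bin_width:7.3f} Hz, {duration_ms:5.2f} ms")
```

This prints 46.875 Hz / 21.33 ms for 1024 samples and 187.500 Hz / 5.33 ms for 256 samples.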

If you want to show an averaged signal, the best thing to do is to average the channels in the time domain (just add the buffers and normalise afterwards) and then compute the FFT. Because the FFT is linear, the result is the same as taking the FFT of each channel and averaging the complex spectra (averaging the magnitudes would give a different result whenever the channels differ in phase).
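The linearity claim is easy to verify numerically. A small check using a naive DFT (the helper `dft` and the sample values are mine, chosen arbitrarily):

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, enough to demonstrate the math."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

left  = [0.0, 1.0, 0.5, -0.25, -1.0, 0.3, 0.8, -0.6]
right = [0.2, -0.4, 0.9, 0.1, -0.7, 1.0, -0.2, 0.5]

mono = [0.5 * (l + r) for l, r in zip(left, right)]  # average in time domain

spectrum_of_mono = dft(mono)
mean_of_spectra  = [0.5 * (a + b) for a, b in zip(dft(left), dft(right))]

# The two complex spectra agree bin by bin.
assert all(abs(a - b) < 1e-9 for a, b in zip(spectrum_of_mono, mean_of_spectra))
```

Note the equality holds for the complex spectra; the magnitude of the averaged signal can still be smaller than the averaged per-channel magnitudes when the channels are out of phase.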


A couple of ideas.

You can average the samples of each channel together (sum to mono). The catch is that there will be phase cancellation, which may or may not be a problem for your analyzer. If the source audio has a Haas effect applied, for instance, the spectrum will show comb filtering that doesn’t really exist.

You can also show the difference of the two channels (convert to a “side” channel). It’s interesting from a data-analysis perspective, but it won’t give the user much that is intelligible.

An alternative to both is to recognize that the perceived spectrum is going to be affected by masking; in other words, the stronger signal will be more apparent. A very naïve way to deal with this is to take the FFT of both channels and, for each bin, plot whichever channel has the higher magnitude.
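A sketch of that per-bin maximum, again with a naive DFT (the helper and the two test tones are mine):

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) DFT for demonstration."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

left  = [math.cos(2 * math.pi * 2 * n / 8) for n in range(8)]  # tone in bin 2
right = [math.cos(2 * math.pi * 1 * n / 8) for n in range(8)]  # tone in bin 1

mags_left  = [abs(c) for c in dft(left)]
mags_right = [abs(c) for c in dft(right)]

# Per bin, display whichever channel is louder: both tones stay visible.
display = [max(l, r) for l, r in zip(mags_left, mags_right)]

assert display[1] > 3.9 and display[2] > 3.9  # both peaks survive
```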

The flaw there is assuming that both channels are correlated. You can always just plot two lines, or have two plots.

If the goal is a simple analyzer that tells the user in general what’s going on, using a mono sum or plotting the channel with the higher bin magnitude is probably best. If you want a super-accurate display, separate channels are a good option.


A good way to avoid the phase-cancellation problem is to feed the left and right channels into the real and imaginary inputs of a complex FFT: the resulting magnitudes will be correct (no cancellation caused by the relative phase of the channels at each frequency).
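This is the classic “two real FFTs from one complex FFT” trick: pack left into the real part and right into the imaginary part, then unpack each channel’s spectrum from the conjugate symmetry of real signals. The channels are never summed, so neither one’s magnitude can cancel the other’s. A sketch with a naive DFT (helper and sample values are mine):

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT for demonstration."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

left  = [0.0, 1.0, 0.5, -0.25, -1.0, 0.3, 0.8, -0.6]
right = [-s for s in left]  # fully out of phase: a mono sum would cancel

N = len(left)
Z = dft([complex(l, r) for l, r in zip(left, right)])  # one complex transform

# Unpack: for a real input, X[k] = conj(X[N-k]), which lets the two spectra
# be separated from the single packed transform.
L = [(Z[k] + Z[-k % N].conjugate()) / 2  for k in range(N)]
R = [(Z[k] - Z[-k % N].conjugate()) / 2j for k in range(N)]

assert all(abs(a - b) < 1e-9 for a, b in zip(L, dft(left)))
assert all(abs(a - b) < 1e-9 for a, b in zip(R, dft(right)))
```

Because both spectra are recovered individually, each channel’s magnitude is exact even for these perfectly anti-phase inputs.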


Are you sure about that? DFT{x + y} = DFT{x} + DFT{y}, so there should be no difference between summing in the Fourier domain and summing in the time domain.


Thanks guys for your replies, they opened my mind. I’ll share my solution once I’ve implemented it in a couple of days.