When visualising frequencies, does it matter if some audio is missed by the FFT?

Hi everyone, I was looking at the FFT visualisation tutorial, which uses a FIFO. If the FIFO becomes full and new incoming data gets dropped, does this matter for the FFT? Or can the FFT just operate on whatever is in the buffer at the time we want to display the visualisation?

(Basically, can non-overlapping sections of audio be processed by the FFT?) Thanks.

That does not matter. The only downside is that your spectrum refresh rate may be low (e.g. a 2048-sample FFT at 44100 Hz gives roughly a 22 Hz refresh rate).

You can use double buffering on top of a circular buffer. Then you can run the FFT at any arbitrary rate (a background thread is recommended).
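To make the suggestion concrete, here is a minimal sketch of double buffering in plain C++ (no JUCE; all names are made up): the audio thread fills whichever buffer is not marked as latest, then publishes it, and the FFT thread can snapshot the latest complete block at any rate it likes. This simple version assumes the reader finishes its copy between two writes.

```cpp
#include <array>
#include <atomic>
#include <cstddef>

constexpr std::size_t fftSize = 2048;

std::array<std::array<float, fftSize>, 2> blocks {};
std::atomic<int> latest { 0 };   // index of the most recently completed block

// Audio thread: fill the buffer NOT marked latest, then publish it.
void publishBlock (const float* samples)
{
    const int writeIndex = 1 - latest.load (std::memory_order_acquire);

    for (std::size_t i = 0; i < fftSize; ++i)
        blocks[(std::size_t) writeIndex][i] = samples[i];

    latest.store (writeIndex, std::memory_order_release);
}

// FFT thread: snapshot the latest complete block at any rate it likes.
std::array<float, fftSize> snapshotLatest()
{
    return blocks[(std::size_t) latest.load (std::memory_order_acquire)];
}
```

The point of the scheme is that the FFT rate is fully decoupled from the audio callback rate: the reader never waits, it just takes whatever block was published last.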


It doesn’t matter, but I think the best visualisation results come from a high frame rate (I currently use OpenGL so it runs at the refresh rate of the screen) and from redrawing as soon as there are new samples. These almost-duplicate blocks create a kind of interpolation that appears to show more information, since each individual block loses some detail to windowing. But that is a personal perception; I don’t know of any science behind it.

The eye is not fast enough to see every frame. However, I would recommend adding some ballistics to the FFT visualiser, so that a frame, even if it isn’t seen individually, still leaves a visible trace. That way the user doesn’t miss peaks, and the overall picture better describes what’s going on in the audio.
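One common way to implement such ballistics (a sketch, not the poster’s actual code; the class name and decay constant are assumptions) is a per-bin peak hold with exponential decay: each bin jumps up instantly to a new peak but falls slowly, so short transients stay visible for several frames.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Per-bin peak hold with exponential decay for a spectrum display.
class SpectrumBallistics
{
public:
    explicit SpectrumBallistics (std::size_t numBins, float decayPerFrame = 0.9f)
        : displayed (numBins, 0.0f), decay (decayPerFrame) {}

    // Call once per video frame with the freshly computed magnitudes.
    const std::vector<float>& update (const std::vector<float>& magnitudes)
    {
        for (std::size_t i = 0; i < displayed.size(); ++i)
            displayed[i] = std::max (magnitudes[i], displayed[i] * decay);

        return displayed;
    }

private:
    std::vector<float> displayed;
    float decay;
};
```

A decay of 0.9 per frame at 60 fps makes a peak fade over roughly half a second; tune it to taste.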


Thanks for the responses, everyone.

I am still a bit confused about something. From my understanding after reading the replies, you need to overlap the audio buffers by, say, 50% before going into the FFT, in order to make up for the power lost at the edges of the windowing function.
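For reference, “50% overlap” normally means advancing the analysis frame by half the FFT size between FFTs, so the taper at the edge of one window is compensated by its neighbour. A minimal sketch in plain C++ (the Hann window is an assumed choice, not something the tutorial mandates):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Extract one Hann-windowed analysis frame starting at frameStart.
// With hopSize = fftSize / 2 between successive calls, every sample
// is covered by two windows (50% overlap).
std::vector<float> windowedFrame (const std::vector<float>& audio,
                                  std::size_t frameStart,
                                  std::size_t fftSize)
{
    std::vector<float> frame (fftSize, 0.0f);

    for (std::size_t i = 0; i < fftSize && frameStart + i < audio.size(); ++i)
    {
        const float hann = 0.5f - 0.5f * std::cos (2.0f * 3.14159265f
                                                   * (float) i / (float) (fftSize - 1));
        frame[i] = audio[frameStart + i] * hann;
    }

    return frame;
}
```

As the replies below note, this overlap matters for accurate analysis/resynthesis; for a visualiser it is optional.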

I’m wondering, though: if my FFT size is 2048, do I want my FIFO to be 2048 as well? Say I am running FFTs on a background thread and that thread gets held up because the FFT is taking a long time to process. Won’t I potentially be missing audio (because the FIFO is full while I’m not reading), in which case the buffers will no longer be overlapping?

Or do I want my FIFO to be much bigger than the FFT size?

I guess the time to run the FFT should be shorter than the time it takes the incoming audio to fill the FIFO?

Yes, the FIFO buffer size is also 2048. And you don’t have to care about the overlapping, since you’re only using it for visualisation. By the way, at 60 fps the chance of missing any audio samples is very small (unless the background thread gets stuck).
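The arithmetic behind that “very small chance” claim can be sketched like this (assuming a 44.1 kHz sample rate, a 2048-sample FIFO and 60 fps reads, which are the figures discussed above):

```cpp
#include <cstddef>

// Milliseconds of audio the FIFO can hold before it overflows.
double fifoMilliseconds (double fifoSize, double sampleRate)
{
    return 1000.0 * fifoSize / sampleRate;   // 2048 @ 44100 Hz -> ~46.4 ms
}

// Samples arriving between two reads at the given frame rate.
double samplesPerFrame (double sampleRate, double fps)
{
    return sampleRate / fps;                 // 44100 / 60 -> 735
}
```

So a 2048-sample FIFO holds about 46 ms of audio, while a 60 fps frame lasts about 16.7 ms and delivers 735 new samples: the background thread would have to stall for almost three frames before anything is actually dropped.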

From my point of view, it doesn’t matter. Most users will perceive a 60 fps FFT display as accurate and responsive.

I use a circular buffer and I don’t have to worry about anything: the FFT size or the frame rate can vary at any time, and it always adapts, since all it does is take the latest FFT-size block of data from the circular buffer.

It works like this: on one side, the audio processor adds a new block of processed samples of any size to the circular buffer and updates the write pointer at the end. On the other side, without any synchronisation or waiting, the viewer takes a block of FFT size ending at the write pointer. This can produce a lot of overlap if the fps is high, or lose data if the fps is low, but it only affects display accuracy.

A higher fps, and therefore greater overlap, lets some frequencies persist across several frames so they can actually be seen, which is why I think the best setup is a high frame rate with large buffers, even 4096 samples. That also gives much greater precision in the frequency domain, although you lose some precision in the time domain, so ideally the size should be user-adjustable to find the balance. As I said, with a circular buffer you can change the size at any time without needing to restart.

Would you like to share how you make the viewer (or the background thread) non-blocking? Currently I am using double buffering, which locks the background thread while the processor is adding samples to the circular buffer.

The viewer only takes the block ending at the write cursor, and the processor only writes to the buffer starting from the write cursor; when it finishes writing a block, it updates the cursor to writeCursor + blockSize. So the cursor sets the boundary for both sides. The circular buffer must be large enough that the end of a write never overlaps the region being read.
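The cursor scheme described above might look like this in plain C++ (a sketch with made-up names, not the poster’s actual code): the audio thread is the only writer, the viewer the only reader, and the atomic write cursor is the single point of synchronisation.

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <vector>

constexpr std::size_t bufferSize = 8192;   // must comfortably exceed the FFT size

std::array<float, bufferSize> ring {};
std::atomic<std::size_t> writeCursor { 0 };

// Audio thread: write the block, then publish the new cursor position.
void writeBlock (const float* samples, std::size_t numSamples)
{
    const std::size_t pos = writeCursor.load (std::memory_order_relaxed);

    for (std::size_t i = 0; i < numSamples; ++i)
        ring[(pos + i) % bufferSize] = samples[i];

    writeCursor.store ((pos + numSamples) % bufferSize, std::memory_order_release);
}

// Viewer: copy the fftSize samples that end at the write cursor.
std::vector<float> readLatest (std::size_t fftSize)
{
    const std::size_t end = writeCursor.load (std::memory_order_acquire);
    std::vector<float> block (fftSize);

    for (std::size_t i = 0; i < fftSize; ++i)
        block[i] = ring[(end + bufferSize - fftSize + i) % bufferSize];

    return block;
}
```

Neither side ever blocks: the reader just reads behind whatever cursor position was last published, which is exactly why a large buffer is needed to keep the write from catching up with the region being read.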

By the way, I don’t worry about threads either, since this draws continuously in the OpenGL component’s render function. If I want to reduce usage, I use OpenGLContext::setSwapInterval. So everything is as simple as, in the render function, taking the block before the write cursor of the circular buffer, processing it to get the FFT, converting the data to vertex heights, and drawing.

Thanks for the info. My current implementation uses the audio processor to write data, a background thread to process the FFT, and the message thread to generate the paths. It is a bit too complicated for me :innocent: I will try your approach once I run into issues.

I think that with JUCE 8 it would now be best to use Direct2D for drawing, with a timer triggering repaints: the paint function fetches the data, processes it, and draws, and you don’t need any extra thread. But I’m not sure how Direct2D works.