Hi there, JUCE developers.

I’m working on an Audio Application which records audio from a stereo input and obtains the third-octave band levels, as well as some other parameters related to Interaural Cross-Correlation (correlating the left and right channels over a range of delay times). So far I’ve followed the JUCE Recording example, and my recordings are perfectly fine. I also tried the FFT analysis in a separate app, and it works fine as well.

Then I decided to combine both algorithms. However, I’ve been struggling to find the best way to compute the FFT, separate the magnitude values into third-octave bands, and save them.

I’m calling the pushNextSampleIntoFifo() method in audioDeviceIOCallback(), right after adding the buffer to the WAV file via the ThreadedWriter. Both samplesToBuffer and fifoSize are the same (32768), so I would expect to compute one FFT per buffer. However, as soon as I call the drawNextFrameOfSpectrum() method (just to obtain the magnitude values for each frequency bin), the recorded audio gets messed up, as if it no longer records every sample the way it did before I added the frequency analysis. I can see now that these algorithms cannot be called from audioDeviceIOCallback() directly.

I guess my question is: how can I record audio while simultaneously computing complex signal-analysis algorithms (FFT and cross-correlation)?