I am trying to perform an FFT (the FFT itself is working) in a plug-in, but I want a window length of at least one second. I want the FFT to be calculated in smaller chunks, spread across multiple callbacks while it computes. I would like to do this in a threaded fashion (as that’s how I assume it should be done), but I am completely lost. Any feedback at all is greatly appreciated.
Conclusion: you can do it in smaller blocks, but you still need the whole set of data (it makes sense if you think about it). You would also severely cripple the performance of the FFT routine (it wouldn’t even be an FFT any more); in the end I found it not to be worth it at all.
If you elaborate on the use case, there may be other solutions.
Thanks, I had seen that before, but I haven’t really tried implementing that particular solution yet. I didn’t mean that I want to calculate it in chunks like that; rather, I want to calculate it in between calls to the high-priority audio callback.
I timed my FFT in C++ on my computer and it takes 30 ms. Obviously that is too long to run in the callback without audible artifacts. So I don’t want to calculate small FFTs and combine them into a larger FFT; I want to calculate fractions of the larger FFT in between audio callbacks and update the result every 50 ms or whatever it takes.
Okay, that wasn’t really how your question was phrased, but you’re in luck anyway. This is very easy: you basically have a producer/consumer model. Your processing function produces samples, and the consumer thread eats them. When the consumer has eaten enough samples (say, 44100 for one second at 44.1 kHz), you run the FFT routine on the consumer thread. You haven’t specified what you actually need the data for, so it’s hard to suggest what to do next.
Your basic building block is the SPSC queue - single producer, single consumer. There are a variety of free implementations out there - I use the one from moodycamel.
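To make the model concrete, here is a minimal SPSC ring-buffer sketch (all names here are mine, and a production queue such as moodycamel’s handles far more edge cases): the audio callback pushes samples without ever blocking, and a worker thread pops them and runs the FFT once a full window has accumulated.

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Minimal lock-free SPSC ring buffer (illustrative only; a real queue
// such as moodycamel::ReaderWriterQueue covers many more edge cases).
class SpscRing {
public:
    explicit SpscRing(std::size_t capacity) : buf(capacity + 1) {}

    // Called from the audio callback (producer). Never blocks.
    bool push(float sample) {
        const std::size_t w = writePos.load(std::memory_order_relaxed);
        const std::size_t next = (w + 1) % buf.size();
        if (next == readPos.load(std::memory_order_acquire))
            return false;                      // full: drop rather than block
        buf[w] = sample;
        writePos.store(next, std::memory_order_release);
        return true;
    }

    // Called from the FFT worker thread (consumer).
    bool pop(float& sample) {
        const std::size_t r = readPos.load(std::memory_order_relaxed);
        if (r == writePos.load(std::memory_order_acquire))
            return false;                      // empty
        sample = buf[r];
        readPos.store((r + 1) % buf.size(), std::memory_order_release);
        return true;
    }

private:
    std::vector<float> buf;
    std::atomic<std::size_t> writePos { 0 };
    std::atomic<std::size_t> readPos  { 0 };
};
```

The worker thread just pops into a local window of, say, 44100 samples and runs the (slow) FFT there, so the audio callback never waits on the transform.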
As I said in my previous reply, you cannot split the operation up into sequential chunks in an audio callback unless you have all the data available, in which case you’re just being inefficient; the threaded solution is better in every respect.
I am trying to distinguish musical notes with high accuracy, hopefully that is enough of an explanation for needing a large FFT.
When I said split it up into chunks, I was hoping the threading would do that for me. I am not a programmer by nature though, so I am not familiar with threading. I was hoping that it would be as simple as starting the FFT function I have already (which works) as a thread, and as long as the thread is set up properly it should work.
I am looking into moodycamel now, thanks for the advice.
The threading does not split it up into chunks; instead, the work just happens in the background. For pitch detection and similar topics, there are much better ways of getting resolution than increasing the Fourier transform size, at least from a memory and performance perspective. The simplest method is phase interpolation of bins from two transforms, which in theory gives you arbitrarily precise frequency detection, so long as nearby frequency components don’t fall into the same bin.
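A sketch of the two-frame phase method (function names and parameters are illustrative, and a direct single-bin DFT stands in for a real FFT for clarity): measure the phase of the same bin in two frames a hop apart, subtract the phase advance the bin centre would produce, wrap the remainder, and convert it back to hertz.

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

constexpr double kPi = 3.141592653589793;

// Direct evaluation of one DFT bin (stand-in for taking that bin from an FFT).
static std::complex<double> dftBin(const std::vector<double>& x,
                                   std::size_t start, std::size_t N, std::size_t k) {
    std::complex<double> acc { 0.0, 0.0 };
    for (std::size_t n = 0; n < N; ++n)
        acc += x[start + n] * std::polar(1.0, -2.0 * kPi * double(k) * double(n) / double(N));
    return acc;
}

// Phase-based frequency refinement from two frames one hop apart.
double refineFrequency(const std::vector<double>& x, std::size_t N,
                       std::size_t hop, std::size_t k, double sampleRate) {
    const double phi1 = std::arg(dftBin(x, 0,   N, k));
    const double phi2 = std::arg(dftBin(x, hop, N, k));

    // Phase advance expected if the signal sat exactly on the bin centre.
    const double expected = 2.0 * kPi * double(k) * double(hop) / double(N);

    // Wrap the deviation into (-pi, pi].
    double delta = phi2 - phi1 - expected;
    delta -= 2.0 * kPi * std::round(delta / (2.0 * kPi));

    // Bin-centre frequency plus the correction implied by the deviation.
    return double(k) * sampleRate / double(N)
         + delta * sampleRate / (2.0 * kPi * double(hop));
}
```

With N = 1024 at 44.1 kHz the bins are about 43 Hz apart, yet this recovers a 440.5 Hz sine to well under a hertz, provided the deviation stays within the wrap limit (roughly ± sampleRate / (2 · hop) around the bin centre).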
Note that you can also achieve this purely by interpolating the transform, either with sinc interpolation after the transform or with zero-padding before it.
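A sketch of the zero-padding variant (naive DFT for brevity; in practice you would use a real FFT, and the names here are illustrative): padding an N-sample block to M samples before the transform interpolates the spectrum onto an M-point grid, so the magnitude peak lands much closer to the true frequency than the original N-point bin spacing allows.

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

constexpr double kPi = 3.141592653589793;

// Zero-pad the block to M samples, transform, and return the frequency of
// the strongest positive-frequency bin. Naive O(M^2) DFT for brevity only.
double peakFrequencyZeroPadded(std::vector<double> x, std::size_t M,
                               double sampleRate) {
    x.resize(M, 0.0);                          // zero-pad to M samples
    std::size_t bestBin = 0;
    double bestMag = -1.0;
    for (std::size_t k = 0; k <= M / 2; ++k) { // positive frequencies only
        std::complex<double> acc { 0.0, 0.0 };
        for (std::size_t n = 0; n < M; ++n)
            acc += x[n] * std::polar(1.0, -2.0 * kPi * double(k) * double(n) / double(M));
        if (std::abs(acc) > bestMag) { bestMag = std::abs(acc); bestBin = k; }
    }
    return double(bestBin) * sampleRate / double(M);
}
```

For example, a 512-sample block at 44.1 kHz has roughly 86 Hz bins, but padded to 4096 samples the peak lands on a grid of about 10.8 Hz.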
Take note that doing this well is actually extremely complex. If anyone could do good polyphonic pitch detection, Melodyne would not be alone in the world.
By the way, my FFT doing a 2 × 65536-sample transform in double precision, including all the processing I do, takes the equivalent of 1.6 ms. What FFT are you using, and did you measure this on a release build?
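For anyone wanting to repeat that kind of measurement, here is a sketch of a timing harness (the radix-2 FFT below is a plain textbook implementation, not the one mentioned above, and the helper names are mine): average over many runs with std::chrono, and make sure optimisations are enabled (-O2/-O3 or a release configuration), otherwise the numbers are meaningless.

```cpp
#include <chrono>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

constexpr double kPi = 3.141592653589793;

// Plain iterative radix-2 Cooley-Tukey FFT (power-of-two sizes only),
// just to have something representative to time.
void fft(std::vector<std::complex<double>>& a) {
    const std::size_t n = a.size();
    for (std::size_t i = 1, j = 0; i < n; ++i) {   // bit-reversal permutation
        std::size_t bit = n >> 1;
        for (; j & bit; bit >>= 1) j ^= bit;
        j ^= bit;
        if (i < j) std::swap(a[i], a[j]);
    }
    for (std::size_t len = 2; len <= n; len <<= 1) {
        const std::complex<double> wlen = std::polar(1.0, -2.0 * kPi / double(len));
        for (std::size_t i = 0; i < n; i += len) {
            std::complex<double> w { 1.0, 0.0 };
            for (std::size_t j = 0; j < len / 2; ++j) {
                const auto u = a[i + j];
                const auto v = a[i + j + len / 2] * w;
                a[i + j] = u + v;
                a[i + j + len / 2] = u - v;
                w *= wlen;
            }
        }
    }
}

// Average the transform time over several runs; one measurement is too noisy.
// (The per-run copy of the input adds a little overhead to each figure.)
double timeFftMs(std::size_t n, int runs) {
    std::vector<std::complex<double>> data(n);
    for (std::size_t i = 0; i < n; ++i)
        data[i] = std::sin(2.0 * kPi * 440.0 * double(i) / 44100.0);
    auto work = data;
    const auto t0 = std::chrono::steady_clock::now();
    for (int r = 0; r < runs; ++r) { work = data; fft(work); }
    const auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count() / runs;
}
```

Calling `timeFftMs(65536, 20)` on a release build gives a rough per-transform figure you can compare against a library FFT.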
It’s some random FFT I found written for C++, but it sounds like I need to find a different one. I didn’t actually measure it in JUCE though, just as a standalone C++ file on my MacBook Pro.