I’m in the process of tidying up some FFT classes I did a while ago using vDSP on the Mac (cheers aaronleese for spurring me into action on this) and I thought I would take the opportunity to ask for a bit of general advice and open up a bit of a discussion on the topics surrounding the issue. All feedback is welcome and please feel free to share any experiences you might have had.
To test the fft classes I’ve created a spectroscope component that uses a background thread to do its rendering. I was wondering what models people use for this kind of thing. At the moment I have my audio callback pushing samples into the spectroscope’s AbstractFifo buffer on the audio thread. The internal spectroscope thread then loops through the buffer and processes the fft in blocks. It then finds the magnitudes of the fft and updates an internal magnitudes buffer, but only where the new values are bigger.
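That max-hold step can be sketched in a few lines of plain C++ (no JUCE or vDSP involved; the function name is just mine, not from the real classes):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// After each FFT pass, keep the larger of the stored and new magnitude for
// each bin, so the display buffer always holds the loudest value seen since
// the last repaint.
void updateMaxHold (std::vector<float>& displayMags,
                    const std::vector<float>& newMags)
{
    const std::size_t n = std::min (displayMags.size(), newMags.size());

    for (std::size_t i = 0; i < n; ++i)
        displayMags[i] = std::max (displayMags[i], newMags[i]);
}
```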
Now the message thread takes over using the Spectroscope’s Timer. This uses the internal buffer to draw the scope lines onto an Image and signal a repaint. Then the internal buffer is multiplied by 0.7 to make the peaks fall. All I need to do in my paint callback is then draw the image.
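The peak-fall part is just a scale applied after the scope lines have been drawn. A minimal sketch (again, my own naming, not the actual class):

```cpp
#include <vector>

// Run from the timer callback after the image has been updated: scale every
// bin towards zero so peaks decay smoothly between repaints.
void applyPeakFall (std::vector<float>& displayMags, float fallFactor = 0.7f)
{
    for (auto& m : displayMags)
        m *= fallFactor;
}
```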
Now this seems to work pretty well: the scope looks as expected, and from my initial thoughts this approach has a few advantages…
- The audio thread is blocked for very little time, all it does is copy its sample block to a fifo and set a flag.
- Because the magnitudes buffer is updated after every fft operation, and each bin is only overwritten when the new amplitude is higher, the scope always shows the most up-to-date, accurate information; no audio frames are lost.
- Most of the heavy work (i.e. the fft processing) is kept off the audio and message threads.
- Any audio buffer/fft block size differences are dealt with easily by the fifo.
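To illustrate that last point, here’s a deliberately simplified, single-threaded model of how the fifo absorbs the buffer-size mismatch (nothing like JUCE’s AbstractFifo internally, just the idea: the audio side pushes whatever block size it gets, and the fft thread pops fixed-size chunks when enough samples have accumulated):

```cpp
#include <cstddef>
#include <vector>

class BlockAdapter
{
public:
    explicit BlockAdapter (std::size_t fftSizeIn) : fftSize (fftSizeIn) {}

    // Audio side: append an arbitrary number of samples.
    void push (const float* samples, std::size_t num)
    {
        buffer.insert (buffer.end(), samples, samples + num);
    }

    // FFT side: returns true and fills `out` once a full block is available.
    bool popBlock (std::vector<float>& out)
    {
        if (buffer.size() < fftSize)
            return false;

        const auto n = static_cast<std::ptrdiff_t> (fftSize);
        out.assign (buffer.begin(), buffer.begin() + n);
        buffer.erase (buffer.begin(), buffer.begin() + n);
        return true;
    }

private:
    std::size_t fftSize;
    std::vector<float> buffer; // a real version would be a lock-free ring buffer
};
```

Note this sketch isn’t thread-safe; in the real component AbstractFifo handles the cross-thread coordination.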
And a few disadvantages:
- There is a lot of copying of audio blocks around, e.g. from the audio callback to the fifo buffer, then from the fifo buffer to a temp block, from the temp block to the processed complex fft data, and from this to the magnitudes buffer. This happens for every audio block; that’s a lot of copying which could probably be cut down, but doing so would mean spending more time in the audio thread processing fft data.
- The scope’s line generation is performed on the message thread. Whilst this seems to work ok, I don’t really like performing potentially long drawing ops on the message thread. This could be moved to the fft processing thread or possibly its own thread, but is this getting a bit carried away? Four threads to draw one element?
I’ve got into the habit, when drawing any complex audio data (waveforms, scopes etc.), of rendering in sections on a background thread, then shuffling a cached image along internally and appending the new data to the end. This has made my rendering times MUCH better than they used to be. However, is this really the best approach here, given that all I basically need to do is update a path and draw it in the paint method? I think I might be adding extra work by drawing a whole image each time.
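For anyone unfamiliar with the shuffle-along idea, here’s a toy model of it with the image replaced by a deque of rendered columns (no juce::Image involved): only one new column is rendered per update, and the oldest one is dropped, which is the same effect as blitting the cached image left by one pixel and painting the new strip at the right edge.

```cpp
#include <cstddef>
#include <deque>
#include <utility>
#include <vector>

class ScrollingCache
{
public:
    explicit ScrollingCache (std::size_t widthIn) : width (widthIn) {}

    void appendColumn (std::vector<float> column)
    {
        columns.push_back (std::move (column));

        if (columns.size() > width)
            columns.pop_front(); // analogous to scrolling the cached image along
    }

    std::size_t numColumns() const { return columns.size(); }

private:
    std::size_t width;
    std::deque<std::vector<float>> columns;
};
```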
Graphics::drawLine or Path::lineTo?
Following on from the above, is it faster to draw lines using the graphics ops or to create a Path (even if it has a few thousand points) and stroke it in one go? I guess it depends on how the path is iterated and whether its drawing ops are combined.
Anyway, some things to think about, all thoughts welcome.