Share data between processor and editor

I need to be able to graphically represent the results of an FFT, and in the future manipulate the frequency-domain signal, apply an IFFT, and graphically represent that as well.

I am having problems getting the fftData variable from the processor to the editor. Some form of static class might work, but I have no idea how the JUCE platform would let me do that.

If you’re trying to do this in a realtime context, just be aware that it’s absurdly difficult to get right.

The way I’ve solved it in the past was to use a bidirectional pair of lock-free queues to pass signal snapshots from the processor to the editor, then recycle the allocated buffers by sending them back from the editor to the processor to be reused. The buffers were all ref-counted objects, so that if the buffer size or sample rate changes I can just abandon the old buffers and allocate a new set.
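In outline, it looks something like this — a minimal sketch, assuming a hypothetical `Snapshot` type and a fixed-capacity single-producer/single-consumer queue (raw pointers with recycling rather than ref-counted objects, just to keep it short):

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <vector>

// Hypothetical snapshot type; pre-allocated on the message thread so the
// audio thread never calls new/delete.
struct Snapshot
{
    std::vector<float> samples;
    double sampleRate = 0.0;
};

// Minimal fixed-capacity single-producer/single-consumer lock-free queue.
// One instance carries full snapshots processor -> editor; a second
// instance carries empty snapshots back editor -> processor for reuse.
template <typename T, std::size_t Capacity>
class SpscQueue
{
public:
    bool push (T* item)                                  // producer thread only
    {
        const auto t = tail.load (std::memory_order_relaxed);
        const auto next = (t + 1) % (Capacity + 1);
        if (next == head.load (std::memory_order_acquire))
            return false;                                // full: caller keeps item
        slots[t] = item;
        tail.store (next, std::memory_order_release);
        return true;
    }

    T* pop()                                             // consumer thread only
    {
        const auto h = head.load (std::memory_order_relaxed);
        if (h == tail.load (std::memory_order_acquire))
            return nullptr;                              // empty
        T* item = slots[h];
        head.store ((h + 1) % (Capacity + 1), std::memory_order_release);
        return item;
    }

private:
    std::array<T*, Capacity + 1> slots {};
    std::atomic<std::size_t> head { 0 }, tail { 0 };
};
```

The processor pops an empty buffer from the return queue, fills it, and pushes it to the editor; the editor does the reverse, so in steady state no allocation or deallocation ever happens on the audio thread.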

See my implementation here:

Audio-thread-side snapshotter, which sends periodic snapshots of the audio signal to the message thread via a rolling buffer plus a lock-free queue of snapshot buffers, with a return lock-free queue for recycling the snapshot buffers to avoid allocation:

Message-thread consumer; note that it pulls raw signal snapshots and does the FFT calculations on the message thread. I did this to reduce the calculation load on the audio thread, especially when the editor isn’t loaded:

Note that my code here is GPL, and probably won’t work outside the context of my app (i.e. a direct copy most likely won’t work for you), but the ideas are there.
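Not that GPL code, but here’s a generic sketch of the consumer half under the same assumptions as the queue sketch above: a `juce::Timer` polls the queue on the message thread, runs the FFT there via `juce::dsp::FFT`, then returns the buffer for recycling. The class name and the `toEditor`/`toProcessor` queues are illustrative, not anything from my app:

```cpp
#include <JuceHeader.h>   // or the individual juce_events / juce_dsp module headers
#include <algorithm>
#include <array>

// Sketch of a message-thread consumer; SpscQueue and Snapshot are the
// illustrative types from the earlier sketch, not JUCE classes.
class SpectrumConsumer : private juce::Timer
{
public:
    SpectrumConsumer (SpscQueue<Snapshot, 4>& in, SpscQueue<Snapshot, 4>& ret)
        : toEditor (in), toProcessor (ret)
    {
        startTimerHz (30);   // poll roughly 30 times per second
    }

private:
    void timerCallback() override
    {
        while (auto* snap = toEditor.pop())
        {
            // The FFT happens here, on the message thread, not in processBlock().
            std::fill (fftData.begin(), fftData.end(), 0.0f);
            const auto n = std::min (snap->samples.size(), (std::size_t) fftSize);
            std::copy_n (snap->samples.begin(), n, fftData.begin());
            fft.performFrequencyOnlyForwardTransform (fftData.data());

            // ... update the spectrum display from fftData[0 .. fftSize/2] ...

            toProcessor.push (snap);   // recycle the buffer; nothing is freed
        }
    }

    static constexpr int fftOrder = 11;
    static constexpr int fftSize  = 1 << fftOrder;

    juce::dsp::FFT fft { fftOrder };
    std::array<float, fftSize * 2> fftData {};   // the FFT works on 2 * fftSize floats
    SpscQueue<Snapshot, 4>& toEditor;
    SpscQueue<Snapshot, 4>& toProcessor;
};
```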

That seems so crazy to me… I now understand why all the examples have one .h and one .cpp… this structure makes no sense… in almost every use case for a VST I can think of, the GUI has to be able to talk to the processing unit.

I use a similar method. However, sometimes the GUI is slow or heavily loaded, and in that situation sending too much data only makes it slower. In my first implementation, I just kept cramming snapshots onto the queue, but this only made the GUI overloaded, sluggish and unresponsive. So in my latest implementation, the most recent signal “snapshot” is held by the processor until the lock-free queue has space. This means the held snapshot is sometimes overwritten when the GUI is slow to respond, so that e.g. only every second snapshot makes it to the GUI for display. But that’s a good thing, because it provides a form of bandwidth limiting whereby the GUI only receives snapshots as often as it can handle. This results in smoother, more responsive GUIs overall.
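A sketch of that overwrite-when-busy idea on the audio side, again using the illustrative `SpscQueue`/`Snapshot` types from the earlier sketch (the `held` pointer and the struct name are made up for this example):

```cpp
#include <algorithm>
#include <cstddef>

// Illustrative audio-thread side of the scheme. `held` keeps the newest
// snapshot while the GUI is busy.
struct SnapshotPublisher
{
    SpscQueue<Snapshot, 4>& toEditor;     // processor -> GUI
    SpscQueue<Snapshot, 4>& toProcessor;  // GUI -> processor (recycling)
    Snapshot* held = nullptr;

    // Called from processBlock(); buffers are assumed pre-sized on the
    // message thread, so the copy below never allocates.
    void publish (const float* block, std::size_t numSamples)
    {
        if (held == nullptr)
            held = toProcessor.pop();     // grab a recycled buffer, if any
        if (held == nullptr)
            return;                       // none free yet: skip this block

        // Overwrite with the newest data. If the previous push failed
        // because the GUI was busy, the stale snapshot is replaced, so
        // only the most recent one ever reaches the display.
        std::copy_n (block, std::min (numSamples, held->samples.size()),
                     held->samples.begin());

        if (toEditor.push (held))         // enqueue only if the GUI has caught up
            held = nullptr;               // ownership passes to the editor
    }
};
```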

If you know the size of the snapshot in advance, or are prepared to allocate a queue big enough to hold one or two snapshots, there’s no need for reference-counted objects, nor for two queues, because it’s easy enough to query the audio-to-GUI queue to see whether it has been serviced, i.e. whether there is enough spare capacity to push a fresh snapshot into it. Hope that makes sense.
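For example, with `juce::AbstractFifo` over a plain float buffer sized for two snapshots, the “has it been serviced?” check is just a free-space query. A sketch (the class and method names, other than the JUCE ones, are made up):

```cpp
#include <JuceHeader.h>
#include <algorithm>
#include <vector>

// Single-queue variant: no ref-counting, no return queue. The audio thread
// simply drops a snapshot whenever the FIFO hasn't been drained by the GUI.
class SnapshotFifo
{
public:
    explicit SnapshotFifo (int snapshotSize)
        : size (snapshotSize),
          fifo (snapshotSize * 2 + 1),                   // +1: AbstractFifo keeps one slot free
          storage ((size_t) snapshotSize * 2 + 1) {}

    bool tryPush (const float* data)                     // audio thread
    {
        if (fifo.getFreeSpace() < size)
            return false;                                // GUI hasn't caught up: drop it

        int start1, size1, start2, size2;
        fifo.prepareToWrite (size, start1, size1, start2, size2);
        std::copy_n (data,         size1, storage.begin() + start1);
        std::copy_n (data + size1, size2, storage.begin() + start2);
        fifo.finishedWrite (size1 + size2);
        return true;
    }

    bool tryPop (std::vector<float>& out)                // message thread
    {
        if (fifo.getNumReady() < size)
            return false;                                // no complete snapshot yet

        int start1, size1, start2, size2;
        fifo.prepareToRead (size, start1, size1, start2, size2);
        out.assign (storage.begin() + start1, storage.begin() + start1 + size1);
        out.insert (out.end(), storage.begin() + start2, storage.begin() + start2 + size2);
        fifo.finishedRead (size1 + size2);
        return true;
    }

private:
    const int size;
    juce::AbstractFifo fifo;
    std::vector<float> storage;
};
```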