I’m relatively new to audio coding, and was looking for some clarification about how to correctly implement any kind of audio visualizer.
Currently, this is how I am doing it:
Step 1: Audio Output
I’m building a basic AudioAppComponent (a stand-alone application, not a plugin). I have a synthesized AudioSource that is played by an AudioSourcePlayer, which has been added to the AudioDeviceManager as an audio callback to output audio.
So basically, the synthesized sound is sent through the application’s main audio callback to the audio output.
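For reference, here is a minimal sketch of that wiring, assuming a JUCE setup where a component owns the device manager, player, and source (the class names `MainComponent` and `MySynthAudioSource` are just illustrative):

```cpp
// Sketch only: assumes JUCE. The component owns the source, player, and device manager.
class MainComponent : public juce::Component
{
public:
    MainComponent()
    {
        deviceManager.initialiseWithDefaultDevices (0, 2);  // no inputs, stereo output
        sourcePlayer.setSource (&synthSource);              // player pulls from the synth source
        deviceManager.addAudioCallback (&sourcePlayer);     // player feeds the device callback
    }

    ~MainComponent() override
    {
        deviceManager.removeAudioCallback (&sourcePlayer);
        sourcePlayer.setSource (nullptr);
    }

private:
    juce::AudioDeviceManager deviceManager;
    juce::AudioSourcePlayer  sourcePlayer;
    MySynthAudioSource       synthSource;  // hypothetical synthesized AudioSource
};
```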
Now the next step is to introduce the visualisers to the chain.
Step 2: Visualisers get Access to Audio Output Callback
I’ve created a class called “Visualisers” that encapsulates multiple audio visualiser objects in a single object. This class is an AudioIODeviceCallback, giving it the ability to access the same audio that is being sent to the speaker output.
An instance of “Visualisers” is added to the DeviceManager as an audio callback, so it receives any audio that is sent to the output.
Now the “Visualisers” have access to a callback of audio data. When this audio callback is called, the audio output data is saved into storage buffers for each individual visualiser object within the encapsulating “Visualisers” object.
The reason I created this “Visualisers” grouping class was that I thought it would be more efficient to register a single AudioIODeviceCallback that feeds multiple visualiser objects at once, instead of one AudioIODeviceCallback per visualiser object. I don’t know if this is actually more efficient.
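A sketch of that grouping callback might look like the following (assuming a recent JUCE; older versions use the `audioDeviceIOCallback` signature without the context argument, and `VisualiserSource::pushSamples` is a hypothetical method). One thing worth verifying is whether a second callback registered on the AudioDeviceManager actually sees the first callback’s output in `outputChannelData`, since the manager mixes its callbacks’ outputs together:

```cpp
// Sketch only: assumes JUCE. One AudioIODeviceCallback fanning audio out
// to several visualiser buffers.
class Visualisers : public juce::AudioIODeviceCallback
{
public:
    void audioDeviceIOCallbackWithContext (const float* const* inputChannelData, int numInputChannels,
                                           float* const* outputChannelData, int numOutputChannels,
                                           int numSamples,
                                           const juce::AudioIODeviceCallbackContext& context) override
    {
        juce::ignoreUnused (inputChannelData, numInputChannels, context);

        // Assumption: outputChannelData holds the audio going to the speakers.
        // Verify how AudioDeviceManager orders/mixes multiple callbacks.
        for (auto* vis : visualisers)
            vis->pushSamples (outputChannelData, numOutputChannels, numSamples);
    }

    void audioDeviceAboutToStart (juce::AudioIODevice*) override {}
    void audioDeviceStopped() override {}

private:
    juce::Array<VisualiserSource*> visualisers;  // hypothetical per-visualiser storage
};
```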
Step 3: Individual Visualisers Process Audio Data & Update Display
This is where I feel the most unsure about what I am doing.
When thinking about visualiser implementations, my first thought is that I should NOT use any locks or JUCE CriticalSections. I watched an audio talk where @timur explained that real-time audio code should be lock-free.
So I think I need some kind of lock-free ring buffer that the audio callback copies samples into, and that a GUI thread then reads from.
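To make that concrete, here is a minimal single-producer/single-consumer ring buffer sketch in plain C++ (JUCE’s AbstractFifo provides the same idea ready-made, so this is just to illustrate the mechanism). The audio thread calls `push()` and never blocks; the GUI thread calls `pop()`:

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Minimal lock-free SPSC ring buffer sketch. Capacity must be a power of
// two so the index mask is cheap.
class SpscRingBuffer
{
public:
    explicit SpscRingBuffer (size_t capacityPow2)
        : buffer (capacityPow2), mask (capacityPow2 - 1) {}

    // Audio thread: drops the sample (returns false) when full, never blocks.
    bool push (float sample)
    {
        const auto w = writeIndex.load (std::memory_order_relaxed);
        const auto r = readIndex.load (std::memory_order_acquire);
        if (w - r >= buffer.size())
            return false;                        // full
        buffer[w & mask] = sample;
        writeIndex.store (w + 1, std::memory_order_release);
        return true;
    }

    // GUI thread: returns false when there is nothing to read.
    bool pop (float& sample)
    {
        const auto r = readIndex.load (std::memory_order_relaxed);
        const auto w = writeIndex.load (std::memory_order_acquire);
        if (r == w)
            return false;                        // empty
        sample = buffer[r & mask];
        readIndex.store (r + 1, std::memory_order_release);
        return true;
    }

private:
    std::vector<float> buffer;
    const size_t mask;
    std::atomic<size_t> writeIndex { 0 }, readIndex { 0 };
};
```

The key design point is that only the producer writes `writeIndex` and only the consumer writes `readIndex`, so the acquire/release pairs are enough to keep the two threads consistent without a lock.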
This is where I feel the most confused. Should I be using a Timer to periodically update the GUI data and call repaint()?
I’ve looked at some open-source visualiser examples where someone used a TimeSliceThread to update the data structures used by the GUI, and then a Timer to call repaint().
Should I even be trying to mess with threads for visualisers?
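For comparison, the Timer-only pattern might be sketched like this (assumes JUCE and some FIFO shared with the audio callback; `readFromFifo` is a hypothetical helper, not a JUCE API):

```cpp
// Sketch only: the message thread drains the FIFO and repaints; no extra
// thread beyond the audio callback and the GUI.
class WaveformView : public juce::Component,
                     private juce::Timer
{
public:
    WaveformView()            { startTimerHz (30); }  // ~30 fps is plenty for a visualiser
    ~WaveformView() override  { stopTimer(); }

    void paint (juce::Graphics& g) override
    {
        g.fillAll (juce::Colours::black);
        // ... draw displayBuffer as a waveform ...
    }

private:
    void timerCallback() override
    {
        // Drain whatever the audio thread pushed since the last tick.
        if (readFromFifo (displayBuffer))   // hypothetical ring-buffer helper
            repaint();
    }

    std::vector<float> displayBuffer;
};
```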
If anyone can confirm or deny any of the steps I am taking, that would be very helpful.