`getNextAudioBlock()` vs `timerCallback()` - multithreading in C++


As I understand it, getNextAudioBlock is normally called (or just happens in some way) every bufferSize/sampleRate seconds. But I wonder what happens when I call some function (let's say myFunction) inside getNextAudioBlock. Does the application wait until everything in myFunction has executed before the next getNextAudioBlock, or does everything happen in parallel (simultaneously)? I've always thought it works in series, one call after another.

But I've found out there is a Timer class. So I started wondering about and analysing all of that. I think that no matter where I call startTimer(someTime), it should call timerCallback() every someTime. So for that to make sense, it should run in parallel with getNextAudioBlock, which has different timing than someTime. Am I right?

And I wonder: is there any way to do parallel processing without the Timer class? I want to call myFunction (for example a DFT or some slow FFT) inside getNextAudioBlock, and be sure that if the execution of myFunction takes longer than bufferSize/sampleRate seconds, the next getNextAudioBlock will still execute without any obstacles. But wait…!!! The next getNextAudioBlock calls myFunction again, so there must be some lag. Am I right? So I could consider using some flag that prevents myFunction from executing until the first call has finished. But I still need to be sure that every subsequent getNextAudioBlock executes without obstruction. How can I be sure of that? I think it's only possible by ensuring myFunction runs in parallel. But how do I do that?

I am not even sure whether anything I'm saying here makes sense.
How do I deal with this kind of doubt?

I am still learning C++ and programming in general. A few days ago a friend told me that everything in programming (in every language) happens linearly, meaning all procedures run one by one. I think he is wrong, but I am not sure why, and not even sure that I am right :slight_smile:
Please help me.


Yes, within a single thread everything executes linearly. But already in a very simple application where audio is involved, you will have at least two threads: the GUI/main thread and the audio thread. So the execution order of different pieces of code is no longer so straightforward. You will "just need to know" by convention which code runs on which thread, and if you are not sure, you will need to figure it out by debugging or logging.

I don't have good advice for your actual problem; it can get very complicated to ensure the audio thread's execution is never blocked for long. One possibility is to use additional threads to process things in parallel, but yeah, it's complicated… You can't, for example, do any GUI work from other threads.
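A minimal plain-C++ sketch of that idea (no JUCE here; `runWorkerDemo` is a made-up name): two threads really do run concurrently, while the code *within* each thread still executes linearly.

```cpp
#include <atomic>
#include <thread>

// A worker thread increments a counter while the spawning thread continues
// independently. The join() is the only point of synchronisation.
// In a real audio app the worker would live for the whole lifetime of the
// app and consume data from a queue, not be spawned per block.
inline int runWorkerDemo()
{
    std::atomic<int> counter { 0 };

    std::thread worker ([&counter]
    {
        for (int i = 0; i < 1000; ++i)
            counter.fetch_add (1, std::memory_order_relaxed);
    });

    worker.join();           // wait for the worker to finish
    return counter.load();   // deterministic only because we joined first
}
```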


OK, but all of that is quite easy for me to deal with in the JUCE "Audio Plug-in" project template.
There are separate PluginEditor and PluginProcessor classes, and the PluginEditor holds a reference to its AudioProcessor (initialised in the constructor), so everything in the PluginEditor can use that AudioProcessor member. That is great.

But in the JUCE "Audio Application" project template there is no class that holds an AudioProcessor. So I see no way to use it outside of getNextAudioBlock. That confuses me very much. How do I deal with that?


The classes as such have nothing to do with threads. Things run on whatever thread calls them. (By convention the PluginEditor code runs on the GUI thread, and the code in the PluginProcessor runs on either the GUI thread or the audio thread, depending on the function.)

In the Audio Application template things run on the GUI thread and the audio thread. getNextAudioBlock is the function that is called from the audio thread, so any code inside getNextAudioBlock runs on the audio thread. The rest of the functions are called from the GUI thread. (Normal Timers also always run on the GUI thread.)
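One way to convince yourself which thread a piece of code runs on is to compare thread ids. Here is a plain-C++ sketch of the idea (`calledFromDifferentThreads` is a made-up name); in JUCE you could instead log `Thread::getCurrentThreadId()` from both `getNextAudioBlock` and `timerCallback`, or check `MessageManager::existsAndIsCurrentThread()`.

```cpp
#include <thread>

// Record the id of the current ("main") thread and of a worker thread,
// then compare them: each thread has its own distinct id.
inline bool calledFromDifferentThreads()
{
    const std::thread::id mainId = std::this_thread::get_id();
    std::thread::id workerId;

    std::thread worker ([&workerId]
    {
        workerId = std::this_thread::get_id();
    });
    worker.join();   // workerId is safely visible after the join

    return workerId != mainId;
}
```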


OK, but just to be sure: does startTimer(someTime) provide parallel processing or not?
What happens if I call startTimer() for myFunction in the constructor of MainComponent, and in timerCallback() I call the FFT function on input data that I take from getNextAudioBlock?
For simple calculations, let's say the timer is set to 1 second, the buffer size is 441 samples, and the sample rate is 44100 Hz. Then getNextAudioBlock is called 44100/441 = 100 times per second. So which audio block will be used as input to the FFT (in timerCallback())? The first of those 100, the last one, or maybe one somewhere in the middle?


And will the FFT be calculated in parallel with getNextAudioBlock, or not?
And what happens if the numbers don't divide evenly? For example, the timer is set to 1 second, but the buffer size is 512 and the sample rate is 44100, so there will be 44100/512 = 86.13281 audio blocks per second. Then which part will be used in timerCallback()?


The audio processing and the Timers running on the GUI thread are completely unsynchronized. (So yes, they run in parallel, and there is no straightforward way to keep things in sync. The GUI timers are also not very precise: the callbacks happen only approximately at the interval you specify in startTimer.)


Thanks for the reply. But can you give a good answer to the last part of my question? I mean this:

What happens if the numbers don't divide evenly? For example, the timer is set to 1 second, but the buffer size is 512 and the sample rate is 44100, so there will be 44100/512 = 86.13281 audio blocks per second. Then which part (which audio block) will be used in timerCallback()?


Those calculations are useless; the Timers don't run at exactly the specified rate. You will need to ensure yourself, in some manner, that the GUI thread accesses the right audio samples for the analysis and similar calculations.


So how do I ensure that? :slight_smile: I was sure it was ensured by keeping a reference to the AudioBuffer.


Again: do NOT store any references or pointers to the AudioBuffer or to the data in it. It is not going to work. You will have to use some kind of circular buffer into which you push (copy) the audio samples from the audio thread. Then the GUI thread can use those copied samples from the circular buffer.
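To make that concrete, here is a sketch of a single-producer/single-consumer ring buffer in plain C++ (the same idea that `juce::AbstractFifo` packages up for you; `SampleFifo` is a made-up class, not a JUCE API). The audio thread only copies samples in and never blocks; the GUI thread pops them out whenever it gets around to it.

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Lock-free SPSC ring buffer sketch. head/tail are ever-increasing counters;
// for a production version the capacity should be a power of two so the
// counters can wrap around safely.
class SampleFifo
{
public:
    explicit SampleFifo (std::size_t capacity)
        : buffer (capacity) {}

    // Called from the audio thread: copy samples in, never block.
    bool push (const float* data, std::size_t numSamples)
    {
        const std::size_t h = head.load (std::memory_order_relaxed);
        const std::size_t t = tail.load (std::memory_order_acquire);
        const std::size_t freeSpace = buffer.size() - (h - t);
        if (numSamples > freeSpace)
            return false;                     // drop instead of blocking

        for (std::size_t i = 0; i < numSamples; ++i)
            buffer[(h + i) % buffer.size()] = data[i];

        head.store (h + numSamples, std::memory_order_release);
        return true;
    }

    // Called from the GUI/analysis thread: take whatever is available.
    std::size_t pop (float* dest, std::size_t maxSamples)
    {
        const std::size_t t = tail.load (std::memory_order_relaxed);
        const std::size_t h = head.load (std::memory_order_acquire);
        const std::size_t available = h - t;
        const std::size_t n = available < maxSamples ? available : maxSamples;

        for (std::size_t i = 0; i < n; ++i)
            dest[i] = buffer[(t + i) % buffer.size()];

        tail.store (t + n, std::memory_order_release);
        return n;
    }

private:
    std::vector<float> buffer;
    std::atomic<std::size_t> head { 0 }, tail { 0 };
};
```

The important design point is that neither side ever waits: if the FIFO is full, the audio thread drops data rather than stalling the audio callback.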


OK, but somebody must know how to do that. Don't you think it's quite important in audio processing? So where should I ask? A lot of programmers create audio applications, so it can't be that difficult, am I wrong?


Many people work at NASA, and still it is not really easy to fly to the moon… :wink: SCNR

What you do is create a FIFO: take your audio in processBlock (or getNextAudioBlock) and pump it into your analysis class. From that point on you have "all the time in the world" to compute your FFT, wait until enough samples are available, and so on.

For that, the analysis class has its own AudioBuffer, big enough to keep appending incoming audio…
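A plain-C++ sketch of what that analysis-side buffer could look like (`FFTAccumulator` and its callback are illustrative names, not a JUCE API): it owns its own storage and only fires the expensive analysis once a complete block of samples has accumulated.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Collects incoming samples of arbitrary chunk sizes and invokes the
// analysis callback each time a full FFT-sized block is ready.
class FFTAccumulator
{
public:
    FFTAccumulator (std::size_t fftSize,
                    std::function<void (const std::vector<float>&)> analyse)
        : block (fftSize), analyseFn (std::move (analyse)) {}

    // Feed it whatever amount the FIFO handed over; chunk size doesn't matter.
    void feed (const float* data, std::size_t numSamples)
    {
        for (std::size_t i = 0; i < numSamples; ++i)
        {
            block[fill++] = data[i];
            if (fill == block.size())
            {
                analyseFn (block);   // run the DFT/FFT on a complete block
                fill = 0;
            }
        }
    }

private:
    std::vector<float> block;
    std::size_t fill = 0;
    std::function<void (const std::vector<float>&)> analyseFn;
};
```

Because this runs on the GUI (or a worker) thread after the samples have been copied out of the audio callback, a slow DFT here cannot glitch the audio.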



You are right with the NASA comparison.
By the way, I am not sure what you mean by FIFO. I am working on a simple application that demonstrates my various implementations of the FFT and compares them. I am going to provide options to change the FFT input buffer size, measure the time each type of FFT takes, change the matrix divider in a mixed-radix FFT, and even compute a plain DFT for comparison. So with the DFT I don't think I have "all the time in the world" :slight_smile: But maybe I am wrong, because I don't know what a FIFO is. Is there an explanation somewhere? :slight_smile:


FIFO has nothing to do with DSP as such. It is an acronym for First In, First Out: a way of organising data so that elements are consumed in the same order they arrived.
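In code, the standard library's `std::queue` is exactly such a FIFO. A tiny sketch of the ordering guarantee (`demoFifoOrder` is a made-up name):

```cpp
#include <queue>
#include <vector>

// Push 1, 2, 3 into a FIFO and drain it: elements come out in the
// same order they went in.
inline std::vector<int> demoFifoOrder()
{
    std::queue<int> fifo;
    fifo.push (1);   // arrives first...
    fifo.push (2);
    fifo.push (3);

    std::vector<int> out;
    while (! fifo.empty())
    {
        out.push_back (fifo.front());   // ...so it leaves first
        fifo.pop();
    }
    return out;
}
```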