Debugging Audio Glitches in getNextAudioBlock

Hi All,

I’m new to the forum, so if this is in the wrong place let me know!

I’m creating a standalone audio application for macOS in Xcode.

The problem I’m getting is that I occasionally get audio glitches in the output stream. After a lot of time spent debugging, it looks as if my ‘getNextAudioBlock()’ method is sometimes executing later than it should, causing audio dropouts. I timed the execution of this method using kdebug signposts and the System Trace instrument in Xcode.
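For reference, I instrumented the callback roughly like this (a simplified sketch; renderGrains() just stands in for my actual per-block work):

#include <sys/kdebug_signpost.h>

void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
{
    kdebug_signpost_start (1, 0, 0, 0, 0);   // interval start, shows up in Instruments’ System Trace
    renderGrains (bufferToFill);             // placeholder for the real rendering code
    kdebug_signpost_end (1, 0, 0, 0, 0);     // interval end, so late callbacks are easy to spot
}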

A screenshot of the ‘glitch area’ is below. To the left you can see the regular calls to getNextAudioBlock() that do not produce glitches; in the centre the calls get further apart due to an unusually long block inside mach_msg_trap(). Is there an easy way to find out what is causing this block?

I was not sure how much info you need from the code, so if you have any ideas about what would be useful to see then please let me know.

Many Thanks,
Jack

Are you doing something in getNextAudioBlock you should not be doing? Locking mutexes for too long, allocating memory, reading/writing from/to disk, posting messages to other threads using some potentially expensive mechanism? If it’s none of that, is your audio DSP code fast enough?

Yes - I think it’s probably something to do with locking or memory allocation, though I’m not sure how to get around it. I’ll give some more context to answer this:

The program is a granular synth, so I have two main threads running (along with the GUI etc.): a scheduler thread, which manages the creation and deletion of grains in a stack (these grains contain purely metadata, no audio data: their length, which amplitude envelope they use, and so on), and the audio thread, which in getNextAudioBlock iterates through the stack of grains, pulls each grain’s current sample from a ReferenceCountedBuffer, multiplies it by the amplitude envelope, and writes it to the output buffer.
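To give a sense of it, the grain metadata looks roughly like this (simplified for the post, not my exact code):

struct Grain
{
    int   startSample   = 0;     // read position in the shared source buffer
    int   lengthSamples = 0;     // grain duration in samples
    int   samplesPlayed = 0;     // how far through the grain playback is
    float amplitude     = 1.0f;  // peak gain

    // simple triangular envelope value at the current playback position
    float envelope() const noexcept
    {
        auto pos = (float) samplesPlayed / (float) lengthSamples;
        return amplitude * 2.0f * (pos < 0.5f ? pos : 1.0f - pos);
    }
};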

Obviously there has to be some communication between these threads: the scheduler needs to write to the stack and the audio thread needs to read from it. The only way I could get this to work is by using ‘Array<Grain, CriticalSection>’ to store the stack, which presumably introduces some kind of lock. If I just use ‘Array<Grain>’, it runs for a while before crashing with a ‘heap corruption detected, free list damaged’ error.
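From the docs, the CriticalSection version only locks each individual call (add, remove and so on), and iterating the array isn’t covered by that, so on the audio thread I take the lock around the whole loop (simplified):

const juce::ScopedLock sl (grains.getLock());  // grains is the Array<Grain, CriticalSection>
for (auto& g : grains)
    addGrainSample (g);                        // hypothetical per-grain work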

What would be the correct (best?) technique for sharing this grain stack between the two threads?

This is also my first JUCE project, so I may be doing MANY things incorrectly.

Scheduler Thread run() Code:
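In rough outline it does this (a simplified sketch with made-up numbers, not the exact code; sourceLength is the length of the shared source buffer):

void run() override                            // the scheduler is a juce::Thread
{
    juce::Random random;

    while (! threadShouldExit())
    {
        Grain g;
        g.lengthSamples = 4410;                                    // e.g. 100 ms at 44.1 kHz
        g.startSample   = random.nextInt (sourceLength - g.lengthSamples);
        grains.add (g);                                            // locks internally

        // delete grains the audio thread has finished playing
        for (int i = grains.size(); --i >= 0;)
            if (grains[i].samplesPlayed >= grains[i].lengthSamples)
                grains.remove (i);

        wait (50);                                                 // ms until the next grain is due
    }
}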

Audio Thread Code:
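And the audio callback is essentially this (again simplified, mono for brevity; source stands in for the data held by my ReferenceCountedBuffer):

void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
{
    bufferToFill.clearActiveBufferRegion();
    auto* out = bufferToFill.buffer->getWritePointer (0, bufferToFill.startSample);

    const juce::ScopedLock sl (grains.getLock());   // the lock I suspect is causing the blocking

    for (int i = 0; i < bufferToFill.numSamples; ++i)
        for (auto& g : grains)
            if (g.samplesPlayed < g.lengthSamples)
            {
                out[i] += source.getSample (0, g.startSample + g.samplesPlayed) * g.envelope();
                ++g.samplesPlayed;
            }
}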

If all you do is fetch a grain and resample (stretch) it, then it should be fast enough to run on the audio thread, and you can get rid of the background thread.

My granular source has an AudioBuffer the length of the created grain, filled with an excerpt from the original sound. An analysis beforehand tells you by what factor you have to stretch the grain to get the desired length.

LagrangeInterpolator interpol;
// reads the source at ‘factor’ input samples per output sample, producing grainSize output samples
interpol.process (factor, originalSignal.getReadPointer (0, start), grain.getWritePointer (0), grainSize);

After that I add the grain into a FIFO with crossfades, since the grain size is not necessarily the buffer size.
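The FIFO step is just an overlap-add into a ring buffer, something along these lines (a sketch; the names and mono channel are placeholders):

void addGrainToOutput (juce::AudioBuffer<float>& ring, int writePos,
                       const juce::AudioBuffer<float>& grain)
{
    // summing the grain in at its offset means overlapping grains
    // (with their fade-in/fade-out envelopes) crossfade naturally
    for (int i = 0; i < grain.getNumSamples(); ++i)
        ring.addSample (0, (writePos + i) % ring.getNumSamples(), grain.getSample (0, i));
}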

I would personally drop the additional thread for generating the grain properties. But that’s just me; maybe you have a good reason to keep it. Generally, the fewer threads involved, the better, because the problem will always be how to communicate between them safely and efficiently.
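If you do keep the scheduler thread, the usual alternative to a lock is a single-producer/single-consumer queue, so the audio thread never blocks. A minimal sketch using juce::AbstractFifo (the capacity and names are arbitrary):

#include <array>

struct GrainQueue
{
    bool push (const Grain& g)                          // scheduler thread only
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (1, start1, size1, start2, size2);
        if (size1 < 1)
            return false;                               // queue full: drop this grain
        slots[(size_t) start1] = g;
        fifo.finishedWrite (1);
        return true;
    }

    bool pop (Grain& g)                                 // audio thread only
    {
        int start1, size1, start2, size2;
        fifo.prepareToRead (1, start1, size1, start2, size2);
        if (size1 < 1)
            return false;                               // nothing pending
        g = slots[(size_t) start1];
        fifo.finishedRead (1);
        return true;
    }

    juce::AbstractFifo fifo { 512 };
    std::array<Grain, 512> slots;
};

The audio thread pops any newly scheduled grains into its own (audio-thread-only) array at the start of each callback, so neither side ever waits on the other.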

Thanks both for your quick responses. I may well try dropping the scheduler thread and integrating its work into the audio thread, since it is actually quite light computationally.

The scheduler makes the program easier to understand conceptually, but programming it is proving difficult!

Sure, it would be awesome if things could easily be made to work like that. Coroutines would be quite helpful here, but C++ support for them is still at too early a stage…