Need help with a ring buffer test case


I haven’t used JUCE much yet, but thought this would be a good place to ask about an audio programming project.

I wanted to tie my interest in audio programming into a project at school, and could use some advice on whether any of my underlying assumptions are wrong and how I could make more progress.

I’m trying to demonstrate thread synchronization issues in audio by comparing different implementations of a ring buffer in a producer-consumer situation.
My plan is to demonstrate the following:
stalling the audio callback by locking a ring buffer to load audio, and then showing the problem being solved by using a lock-free implementation (probably PortAudio’s).
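A minimal sketch of the stall you describe, in plain C++ (no PortAudio involved; all the names here are made up for illustration): one thread holds a lock to simulate loading a wav, while a stand-in for the audio callback measures how long it blocks waiting for that same lock.

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

std::mutex bufferMutex;            // hypothetical lock guarding the ring buffer
std::atomic<bool> loading { false };

// Simulates the producer holding the lock for ~50 ms of "disk I/O".
void slowLoad()
{
    std::lock_guard<std::mutex> lock(bufferMutex);
    loading.store(true);
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
}

// Stand-in for the audio callback: measures how long it stalls
// waiting for the loader to release the lock.
long long measureBlockedMs()
{
    std::thread loader(slowLoad);
    while (! loading.load())       // wait until the loader owns the lock
        std::this_thread::yield();

    auto t0 = std::chrono::steady_clock::now();
    {
        std::lock_guard<std::mutex> lock(bufferMutex); // the "callback" blocks here
    }
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - t0).count();
    loader.join();
    return ms;
}
```

With a real audio callback you’d see this stall as a dropout, since the callback misses its deadline for the duration of the load.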

I have most of the moving parts - I’m using PortAudio to play audio, can play a preloaded wav file, and have both my “naive” ring buffer and PortAudio’s ready to use - but I am a bit confused about what the right test situation would look like.

Mainly I am unsure if I have a correct plan to show the problem case in action.

I’d really appreciate it if anyone could offer some general advice. I’ve been doing synth programming but not much with audio files, so I’m sure I could use a second opinion on whether my plan is feasible and how to tackle it.

Thanks very much!

This sounds like a really cool project.

What I would do is have a high-refresh-rate UI display some kind of data from the audio thread - an easy one is a stream of the current audio callback’s buffer. Use a shared lock between the audio and UI threads that allows the UI thread to block the audio thread while it reads the last callback buffer (or several callback buffers), and have the audio thread fight for the same lock to update that buffer for the UI thread in each audio callback.

In the lock-free version, stream the incoming buffer to a fixed-size (non-allocating) queue which the UI thread can read during each display update.

Source: went from noob to non-noob in audio programming by naively implementing the first version when I first started years ago and getting horrible performance. :wink:

Also, if you’re looking for a lock-free queue, I highly recommend Cameron Desrochers’ SPSC queue. Single header, incredible performance, highly configurable, no dependencies.


Thanks, I think I’ll probably forgo a UI since I have limited time, or at least only use one to actuate some change with a button. My thought was to have a routine to switch the source wav going into the ring buffer, which would lock and allocate. What I’m having a hard time getting my head around is what scenario would call for a ring buffer between the source wav data and the audio callback. Since I could load and change buffers directly, I’m not sure of the use case for the ring buffer in between things…

I ended up using a routine to load a new wav to supply the ring buffer on demand, but even without using a mutex, my “naive” ring buffer works just the same as PortAudio’s one with memory barriers. I was hoping to create a case that would cause a segfault or something, but I haven’t been able to get the naive buffer to fail…

IMHO these are the places where you can get collisions:
(N.B. if I say write pointer, I mean the index into the buffer, not the float* sample pointer…)

  1. Advancing the write pointer and advancing the read pointer: if only your producer acts on the write pointer and only the consumer acts on the read pointer, there cannot be any collision, so both indices always point into the buffer, and hence there cannot be a segfault. Only if one of the indices is updated while the other thread is reading it can it end up with an arbitrary value, most likely pointing outside the buffer.

  2. In a situation like “give me the last n samples” the consumer reads the write pointer, so collisions can occur there.

  3. Reading a sample that is being written: this will result in a nasty click in the audio, but if you don’t monitor for it you might not even notice. The debugger will not catch that at all.

For situation 2 I usually use an atomic index, which sorts that out; for situation 3 I use a buffer big enough, and I only update the atomic index after the samples have been written.
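A sketch of that “give me the last n samples” read (situation 2), with hypothetical names: the consumer loads the write index once, atomically, so it always sees a consistent value even while the producer advances it. The acquire load pairs with the producer’s release store, so every sample written before the index was published is also visible to the reader (situation 3).

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Copies the last n samples that were written before `writeIndex`
// (the slot the producer will write next) was published.
std::vector<float> lastNSamples(const std::vector<float>& buffer,
                                const std::atomic<std::size_t>& writeIndex,
                                std::size_t n)
{
    // One atomic read: the index can never be seen half-updated.
    std::size_t w = writeIndex.load(std::memory_order_acquire);

    std::vector<float> out(n);
    for (std::size_t i = 0; i < n; ++i)
        out[i] = buffer[(w + buffer.size() - n + i) % buffer.size()];
    return out;
}
```

With a plain non-atomic index, a torn read here is exactly what could hand the consumer an arbitrary value and an out-of-bounds access.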

N.B. if you have the same thread reading and writing, you cannot trigger the problems.
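To make that concrete, here is a deliberately naive ring buffer (hypothetical, for illustration) with plain, non-atomic indices and no synchronization at all. Driven from a single thread, every push/pop pair executes in program order, so the indices always stay consistent - which is exactly why a test that reads and writes from the same thread can never expose the race:

```cpp
#include <array>
#include <cstddef>

struct NaiveRing
{
    std::array<float, 8> data {};
    std::size_t writeIdx = 0, readIdx = 0;   // plain indices, no atomics

    void push(float x) { data[writeIdx++ % data.size()] = x; }
    float pop()        { return data[readIdx++ % data.size()]; }
};
```

Interleaving push and pop from one thread, this behaves perfectly every time; only a second thread can catch the indices mid-update.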

Maybe you can create these types of errors; it would be good to get an idea of how hazardous these situations actually are…



Reading this was a major d’oh moment! I did have both the reading and writing code in the same thread, running in the audio callback. It’s no wonder it seemed to be working too perfectly.

I did manage to put the producer in a separate thread and created a synchronized version and two unsynchronized versions, with the naive and lock-free buffers respectively.

I was able to demonstrate that waiting for a lock in the audio thread is bad, but I didn’t actually get the naive buffer version to fail or perform differently. The unsynchronized versions in fact would occasionally crackle while the waiting version played perfectly. If the producer only does that one thing, I guess priority inversion would be the only risk there, but that’s all I can think of.

In hindsight I realized the situation jonathonracz mentioned would have been a better scenario to recreate. It’s actually not clear to me if it would ever make sense to send the audio data for playback into the audio thread from another thread through a buffer; after all, if a wav is loaded, the data is already there to be read.

So I didn’t see the exact narrative in the demonstration I came up with that I was trying for, but it was still really educational to see how threads and buffers react to different tests and see some multi-threaded code that a more complicated program might (or might not) use. Thanks everyone for your comments, it was really helpful! :slight_smile:


It is actually quite a common problem, although you are right to avoid it. New users are often too eager to seek their fortune there, and it is bought at a high price:

  • audio input to be processed in an analyser: the analyser doesn’t need to act on the audio thread
  • the audio is in a file format, that takes too much CPU to decode: BufferingAudioSource can be used
  • the audio comes from the network
  • sound is generated using several threads (this rarely makes sense in a DAW, since the other cores are busy with other things) - there might be situations where it is useful, but most of the time it creates more problems than it solves

So there are use cases, but your research showed you what you are dealing with…