Passing objects from processor to multiple editors

Hi all, I've been spending my holidays watching cricket and thinking about exchanging data between threads. What I'm struggling with is that lock-free queues are good for single-producer/single-consumer scenarios, yet when communicating from an AudioProcessor to an AudioProcessorEditor we need to cater for the possibility that there may be multiple editors. The following options occur to me:

  1. Only communicate POD values atomically (see the sketch at the end of this post) - but I'd like to be able to deliver FFT data, for example
  2. Don't worry about data tearing (e.g. don't worry if processor updates FFT data array mid-way through GUI draw)
  3. Only allow one editor to read from the queue - problem with this is you don't necessarily know which editor is going to be shown by the host
  4. Design a single producer/multi consumer FIFO
  5. Lock-free FIFO to transmit to a dispatcher (aka relay) on the message thread, which in turn makes data available to multiple editors (can use locks here)

The fourth one would be ideal, but I reckon it's not straightforward. So I'm wondering what Timur, Jules (or anyone else) can suggest?

 

Edited to add option 5
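
To make option 1 concrete, here's a minimal sketch of the sort of thing I mean (the RMS value and all of the names are just placeholders):

```cpp
// Option 1 in its simplest form: the processor publishes a single POD value
// atomically and any number of editors can read it without locks.
// Fine for a level meter - but there's no obvious way to extend this to a
// whole FFT frame without tearing.
#include <atomic>

struct SharedState
{
    std::atomic<float> currentRms { 0.0f };
};

// Audio thread, inside processBlock():
//     shared.currentRms.store (rmsOfThisBlock, std::memory_order_relaxed);
//
// Message thread, e.g. in each editor's Timer callback:
//     const float level = shared.currentRms.load (std::memory_order_relaxed);
```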

If you are looking for a multi-producer/multi-consumer lock-free FIFO, I recently had a good experience with this one (although I'm only using it in a single-consumer scenario at the moment):

http://moodycamel.com/blog/2014/a-fast-general-purpose-lock-free-queue-for-c++

But I guess what you really want is a lock-free observer (pub/sub) pattern? A multi-consumer FIFO is "first come, first served" AFAIK, so only one consumer would end up getting any given message.

So why not use a single-producer/single-consumer FIFO for transferring data from the producer thread to a single consumer on the message thread, then let that "dispatcher" consumer buffer the state and broadcast change events to the various editors the regular way? The editors all run on the message/UI thread too, so they can safely access that state. Unless I'm overlooking something...
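
Something along these lines is what I have in mind - just a rough, untested sketch: the class and member names are made up, and I'm reaching for juce::AbstractFifo, Timer and ChangeBroadcaster purely to keep it short:

```cpp
// The audio thread pushes FFT frames into a single-producer/single-consumer
// FIFO; a Timer on the message thread drains it, keeps the most recent frame,
// and broadcasts a change event; every editor reads the buffered copy from
// its ChangeListener callback.
#include <array>

struct FftFrame { std::array<float, 512> bins; };

class FftDispatcher  : public juce::ChangeBroadcaster,
                       private juce::Timer
{
public:
    FftDispatcher()           { startTimerHz (30); }
    ~FftDispatcher() override { stopTimer(); }

    // Audio thread (the single producer). Drops the frame if the GUI side
    // hasn't caught up - no blocking, no allocation.
    void push (const FftFrame& frame)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (1, start1, size1, start2, size2);

        if (size1 > 0)
            storage[(size_t) start1] = frame;

        fifo.finishedWrite (size1);
    }

    // Message thread only (e.g. from a ChangeListener callback).
    const FftFrame& getLatest() const noexcept    { return latest; }

private:
    void timerCallback() override    // message thread (the single consumer)
    {
        bool updated = false;
        int start1, size1, start2, size2;
        fifo.prepareToRead (fifo.getNumReady(), start1, size1, start2, size2);

        // Keep only the newest frame; block 2 holds the later items on wrap-around.
        if (size1 > 0) { latest = storage[(size_t) (start1 + size1 - 1)]; updated = true; }
        if (size2 > 0) { latest = storage[(size_t) (start2 + size2 - 1)]; updated = true; }

        fifo.finishedRead (size1 + size2);

        if (updated)
            sendChangeMessage();     // notifies every registered editor asynchronously
    }

    juce::AbstractFifo fifo { 8 };
    std::array<FftFrame, 8> storage;
    FftFrame latest;
};
```

Each editor would then call dispatcher.addChangeListener (this) in its constructor, removeChangeListener in its destructor, and repaint from changeListenerCallback() using getLatest(). If the dispatcher lives in the processor it outlives any editor the host creates.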

Thanks for the link, Leo - that's an interesting approach, though as you say it doesn't look like it'll do what I want with multiple consumers of the same data. I guess what I'm after is something that only completes the pop when all consumers have completed their read.

Your idea of the dispatcher on the message thread did occur to me, but I neglected to include it in the list of options. The advantage there is that the dispatch system can use locks. I'll add it to the original post and leave the question hanging in case the JUCE team respond in future.

Timur, can you comment on the OP now that you're back?

Hi Andrew,

Single-producer/multi-consumer queues are something I have to admit I've never actually implemented myself... so I'm not sure whether I can be of much help.

If you are looking for an implementation, Boost.Lockfree has a very good one (boost::lockfree::queue). However, as mucoder already said, a multi-consumer queue means that each item only ends up at one consumer - once it's been popped by one consumer, it's gone.
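
A contrived little example (nothing JUCE-specific) of what I mean by that:

```cpp
// Each pushed item is popped by exactly one of the consumers - whichever
// gets there first - so a plain multi-consumer queue can't "broadcast".
#include <boost/lockfree/queue.hpp>
#include <atomic>
#include <iostream>
#include <thread>

int main()
{
    boost::lockfree::queue<int> queue (128);
    std::atomic<int> consumedByA { 0 }, consumedByB { 0 };
    std::atomic<bool> done { false };

    auto consume = [&] (std::atomic<int>& counter)
    {
        int value;
        while (! done || ! queue.empty())
            if (queue.pop (value))
                ++counter;
    };

    std::thread a (consume, std::ref (consumedByA));
    std::thread b (consume, std::ref (consumedByB));

    for (int i = 0; i < 10000; ++i)
        while (! queue.push (i)) {}    // retry if the queue is momentarily full

    done = true;
    a.join();
    b.join();

    // The two counts always sum to 10000: no item ever arrives at both consumers.
    std::cout << consumedByA << " + " << consumedByB << " = 10000\n";
}
```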

You're saying that this isn't what you need - you need each item to arrive at all consumers. So it sounds to me like you actually need two structures here, separating the lock-free queue from the dispatch: a single-producer/single-consumer queue which does the popping, and then a single consumer object that distributes each item to all registered listeners and keeps track of them.

I can't really comment on how this would work in conjunction with the JUCE classes you mention - this is pretty much uncharted territory for me... so there may be a better solution.

Thanks for responding, Timur. I like the idea of the two-stage approach best, so I'll work on that for a while.

Couldn't you just create N single-producer/single-consumer queues and push the items into each of them?

Also take note of https://github.com/cameron314/concurrentqueue (single header file).
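
Basic usage is only a couple of lines - something like this, going from memory of its README:

```cpp
// Tiny usage sketch: multi-producer/multi-consumer, but each enqueued
// item is still dequeued exactly once.
#include "concurrentqueue.h"   // the single header from the repo above
#include <cassert>

int main()
{
    moodycamel::ConcurrentQueue<int> q;

    q.enqueue (25);                      // may be called from any thread

    int item = 0;
    bool found = q.try_dequeue (item);   // likewise
    assert (found && item == 25);
}
```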

Best,
Ben

Yeah, but in this case N is unknown, so it's not so elegant. You don't want to be adding new queues during processing because of the memory allocations, so you'd have to guess that you'd never need more than M queues and hope that N <= M. Practically speaking, you might get away with that most of the time if M is chosen well.
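
If anyone did want to go down that route, I imagine it would look something like this - a completely untested sketch that ignores the editor attach/detach bookkeeping, and MaxEditors is exactly the sort of guess at M that I mean:

```cpp
// Preallocate M single-producer/single-consumer queues up front (one per
// potential editor) so the audio thread never allocates, and hope N <= M.
#include <boost/lockfree/spsc_queue.hpp>
#include <array>
#include <atomic>

struct FftFrame { std::array<float, 512> bins; };

class FanOutFifo
{
public:
    static constexpr int MaxEditors = 4;    // the "M" we have to guess

    // Audio thread: push a copy of the frame into every active slot.
    void push (const FftFrame& frame)
    {
        for (int i = 0; i < numActive.load(); ++i)
            queues[(size_t) i].push (frame);    // silently drops if that slot is full
    }

    // Message thread: editor i pops from its own slot only.
    bool pop (int slot, FftFrame& out)
    {
        return queues[(size_t) slot].pop (out);
    }

    // Message thread, when an editor is created. Detaching and reusing slots
    // is the bookkeeping this sketch leaves out.
    int attach()    { return numActive.fetch_add (1); }

private:
    std::array<boost::lockfree::spsc_queue<FftFrame,
                                           boost::lockfree::capacity<8>>,
               MaxEditors> queues;
    std::atomic<int> numActive { 0 };
};
```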

Thanks for the link, I'll take a look at that tomorrow.