Best way to synchronize processors?

I'm developing neuroscience software based on JUCE. I'm interested in the best way to code the following:

 

I have two processors working on my data, looking for two different kinds of signals, "A" and "B". When Processor 1 detects the start and end of signal "A", it creates events at those two times and adds them to the MidiBuffer. Processor 2 does the same for signal "B".

 

Whenever Processor 1 detects an "A", I want Processor 2 to know, so that it can tag all signal "B" occurrences with metadata specifying that they happened during an "A" event. Every "B" event needs to be tagged as to whether or not it happened during an "A". Furthermore, I want to know the start and stop times of event "A" and correctly plot all "B" events that happened within that timeframe. So both processors need some kind of common timestamp mechanism, so I can plot all "B" events at the correct times relative to the start and stop times of an "A" event.

What's the best way for Processor 2 to find out from Processor 1 that an "A" event has begun or ended, and for the two to stay synchronized on timestamps?

Interesting.  Are your processors working on the same block at the same time? 

So the idea is that the data will be read in, and a "Splitter" module will be used to create two parallel signal chains. Both chains will then be bandpass filtered in different bands, because events of type "A" and "B" happen in different bands. After the BPF, I'll have Processor 1 and Processor 2, respectively. I'll try to draw it out:

                     ----> BPF (150-250 Hz)   -----> Processor 1
Data IN -> Splitter
                     ----> BPF (300-6000 Hz)  -----> Processor 2

Well - you are processing audio in blocks.  So if you are running two separate processors, one will actually be happening after the other (if they are running on the same thread).  

So perhaps some reconciliation after the processors have finished? 

(Or you could move all the processing into a single processor unit, and do it sample by sample at which point it becomes easy ... although your processor is more complex to look at)
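Sample-by-sample in a single processor could look roughly like this. It's a minimal sketch, not your real detection logic: the threshold checks are toy stand-ins for the band-specific detectors, and the structure just shows why tagging "B" against the current "A" state becomes trivial when both detectors share one loop:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Toy single-processor sketch: both detectors run sample by sample on the
// same block, so every "B" hit can be tagged against the current "A" state
// directly, with one shared sample counter as the timestamp.
struct CombinedDetector
{
    bool insideA = false;       // are we currently inside an "A" event?
    uint64_t sampleIndex = 0;   // running timestamp shared by both detectors

    // Placeholder detectors -- substitute the real band-specific logic here.
    bool detectAEdge (float s) const { return s > 0.8f; }   // toy threshold
    bool detectB     (float s) const { return s < -0.8f; }  // toy threshold

    // Returns (timestamp, happenedDuringA) for every "B" hit in the block.
    std::vector<std::pair<uint64_t, bool>> processBlock (const float* data, int numSamples)
    {
        std::vector<std::pair<uint64_t, bool>> bEvents;
        for (int i = 0; i < numSamples; ++i)
        {
            if (detectAEdge (data[i]))
                insideA = ! insideA;                  // toggle on start/end edge
            if (detectB (data[i]))
                bEvents.emplace_back (sampleIndex, insideA);
            ++sampleIndex;
        }
        return bEvents;
    }
};
```

The trade-off is exactly what you said: the processor itself gets more complex, but the synchronization problem disappears because there's only one notion of "now".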

I see. I definitely need two separate processors, because I might want to do only one kind of detection. Would it make sense to just use an unsigned long int as a sample index, i.e. both processors keep a variable tracking how many samples they've processed so far, and then use that as a kind of timestamp for event detection?
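For what it's worth, that sample-index idea could be as simple as the sketch below. The names are illustrative, not JUCE API; the point is just that if both processors receive the same blocks in the same order, independently advanced counters agree and can serve as a shared timestamp:

```cpp
#include <cstdint>

// Each processor owns one of these and advances it once per block.
// As long as both processors see the same blocks in the same order,
// their counters stay in lockstep and act as a common clock.
struct SampleClock
{
    uint64_t samplesProcessed = 0;

    // Call once per block; returns the timestamp of the block's first sample.
    uint64_t advance (int blockSize)
    {
        uint64_t blockStart = samplesProcessed;
        samplesProcessed += static_cast<uint64_t> (blockSize);
        return blockStart;
    }

    // Absolute timestamp of a sample at `offsetInBlock` within that block.
    uint64_t timestampFor (uint64_t blockStart, int offsetInBlock) const
    {
        return blockStart + static_cast<uint64_t> (offsetInBlock);
    }
};
```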

The solution I can see is to make the processes talk through notifications and event loops.

In any case, somewhere you have a thread asking for a frame's data, and you will have a delay of at least the size of the processing window (perhaps 64-2048 frames, depending on what you have chosen). What I suggest is to give the rendering process an event loop. Whenever something happens in what you call Processor A or B, it adds an event to that loop. Before filling the rendering window with data, the rendering process looks at the events it has to consider: if Processor A or B sent something like addEventToNextFrame(positionOfTheEvent); then the renderer just adds the event at the given position.

Keep in mind that in audio processing you are always working with a delay; it might be a single sample frame, but it is still a delay (and one that small is uncommon). You can report that delay or not. What I mean is: when an event occurs in Processor A (or B), you can either add the event as early as possible in the next rendering frame (setting the event offset to 0), or carry over the offset at which the event occurred in the current frame into the next frame to be rendered.
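As a sketch of what that event queue might look like (names like addEventToNextFrame and collectForFrame are illustrative, and this single-threaded version ignores the thread-safe queue a real implementation would need):

```cpp
#include <cstdint>
#include <deque>
#include <vector>

// Detector threads queue events with a sample position; the renderer drains
// the queue for the window it is about to fill. Single-threaded illustration
// only -- a real version needs a thread-safe (ideally lock-free) queue.
struct EventQueue
{
    struct Event { uint64_t position; char type; };  // type: 'A' or 'B'
    std::deque<Event> pending;

    void addEventToNextFrame (uint64_t position, char type)
    {
        pending.push_back ({ position, type });
    }

    // Called by the renderer before filling the frame that covers
    // [frameStart, frameStart + frameLength): returns events in that window.
    std::vector<Event> collectForFrame (uint64_t frameStart, uint64_t frameLength)
    {
        std::vector<Event> out;
        for (auto it = pending.begin(); it != pending.end();)
        {
            if (it->position >= frameStart && it->position < frameStart + frameLength)
            {
                out.push_back (*it);
                it = pending.erase (it);
            }
            else
                ++it;
        }
        return out;
    }
};
```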

If your data are not events but audio samples, it's your responsibility to ensure the next buffer is filled before the rendering deadline. One way to do this is to give your processors A and B an event loop too, and have the rendering thread notify them when it has finished rendering everything. That way your threads A and B just fill a buffer, then wait for a "fill next buffer" event before starting the job again.

To make it short: each processor A and B (threadA and threadB) starts by looking at the events to process in its event loop. If one of them is something like "fill the next rendering (output) buffer", it does so, then posts the result as an event to the rendering thread. When the rendering thread has to render a frame, it looks at its own events, among which it may find the data that threadA and threadB produced. Once done with the processing, the rendering thread notifies threadA and threadB with another event saying "okay, I'm done, you can fill my next buffer".
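A minimal version of that "fill next buffer" handshake, using a condition variable (this is the pattern only, not a drop-in component; the struct and method names are made up for the example):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// One worker fills a buffer and signals the renderer; the renderer consumes
// it and signals back "you can fill my next buffer". Illustration only: a
// real-time audio callback should never block on a mutex -- use a lock-free
// FIFO there instead.
struct BufferHandshake
{
    std::mutex m;
    std::condition_variable cv;
    bool bufferReady = false;     // worker -> renderer: data is available
    std::vector<float> buffer;

    void workerFill (const std::vector<float>& data)
    {
        std::unique_lock<std::mutex> lock (m);
        cv.wait (lock, [this] { return ! bufferReady; }); // wait for "fill next buffer"
        buffer = data;
        bufferReady = true;
        cv.notify_all();          // tell the renderer its input is ready
    }

    std::vector<float> rendererConsume()
    {
        std::unique_lock<std::mutex> lock (m);
        cv.wait (lock, [this] { return bufferReady; });   // wait for the worker
        std::vector<float> out = std::move (buffer);
        bufferReady = false;
        cv.notify_all();          // "okay I'm done, you can fill my next buffer"
        return out;
    }
};
```

In JUCE specifically, something like juce::AbstractFifo is the usual way to get this producer/consumer behavior without blocking the audio thread.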

 

Done this way, all the processes can talk to each other, everything lands in the destination thread (most important), and there's no risk of one being blocked by another. It's a little complex to understand at first, but I use this approach almost every day.

Assuming they receive the same data in the same order, that should be OK, and easy.