Greetings everyone! I’m currently trying to implement a non-uniform partitioned convolution algorithm, and, long story short, I need to split a single process into multiple parallel processes that are given different amounts of time to complete. For example, I would like to be able to create two (or more) processes A and B such that B takes in twice the samples that A does and:
- A is given time equivalent to one buffer to complete.
- B is given time equivalent to two buffers to complete.
So the scheduling would look something like:
Step 1. A outputs audio.
Step 2. A and B output audio.
Cramming step 2 into a single buffer is probably the easier thing to do, but that creates a bottleneck that could get pretty bad with a larger number of processes. So, is there any way to achieve this sort of scheduling in JUCE?
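To make the timing concrete, here is a minimal plain-C++ sketch (illustrative names only, no JUCE) of the block-counter schedule I have in mind, with the heavy work itself left out:

```cpp
#include <cassert>
#include <utility>

// Illustrative only -- not a JUCE API. A fires every host buffer;
// B fires every second buffer (it consumes twice the samples and is
// allowed two buffers' worth of time to produce its result).
struct Schedule
{
    int blockCount = 0;

    // Returns {runA, runB} for the next host buffer.
    std::pair<bool, bool> next()
    {
        ++blockCount;
        const bool runA = true;                  // step 1: A outputs audio
        const bool runB = (blockCount % 2 == 0); // step 2: A and B output audio
        return { runA, runB };
    }
};
```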
You absolutely can. It sounds like you want a child thread, not a process. You can use juce::Thread or juce::ThreadPool.
The way you talk about these jobs “being given a certain amount of time to complete” seems like a misconception – you can only start a worker thread and see when it completes, or kill it if it takes too long. You can’t really tell the OS “do this task in this amount of time”.
Also, beware multithreading with your realtime audio code. If the worker thread is doing something like loading IRs, that’s fine, but if you’re expecting to take a certain portion of the work that needs to happen in the audio callback and use another thread to do that, it will not work and you will end up with worse performance than you had before.
Thanks for the clarification. This is a tricky subject for me to wrap my head around because, from what I’ve read, non-uniform partitioned convolution can be implemented as several uniform partitioned convolution algorithms running in parallel with different buffer sizes.
In principle the parallel algorithms run with different portions of the IR, so what I meant by ‘given a certain time to complete’ was that the other algorithms have, say, 2 buffers to receive the information they need and 2 buffers to perform calculations that are independent of what the other algorithm is doing. I have the vague notion that this should be different from, say, simply attempting to distribute a single workload between two threads.
It seems the B section you mentioned could be considered a “background thread.”
If so, here is an open-source convolution class that uses direct convolution for the smaller buffer, or “head,” and an FFT convolution for the larger “tail” computations. There is also a reverb the author made with the class.
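As a rough illustration of the head/tail idea (direct convolution for both parts here, whereas the class above uses an FFT for the tail), splitting the IR and summing the two partial convolutions reproduces the full convolution exactly:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Direct (time-domain) convolution, with an optional output delay so
// the tail's contribution lands at the right sample offset.
std::vector<float> convolve(const std::vector<float>& x,
                            const std::vector<float>& h,
                            size_t shift = 0)
{
    std::vector<float> y(x.size() + h.size() + shift - 1, 0.0f);
    for (size_t n = 0; n < x.size(); ++n)
        for (size_t k = 0; k < h.size(); ++k)
            y[n + k + shift] += x[n] * h[k];
    return y;
}

// Split the IR into a low-latency "head" and a delayed "tail",
// convolve each part, and sum. In practice the tail would use FFT
// convolution; direct convolution is used here only to show the
// split is exact.
std::vector<float> headTail(const std::vector<float>& x,
                            const std::vector<float>& ir,
                            size_t headLen)
{
    std::vector<float> head(ir.begin(), ir.begin() + headLen);
    std::vector<float> tail(ir.begin() + headLen, ir.end());

    auto y  = convolve(x, head);           // immediate head part
    auto yt = convolve(x, tail, headLen);  // tail, delayed by headLen

    y.resize(std::max(y.size(), yt.size()), 0.0f);
    for (size_t i = 0; i < yt.size(); ++i)
        y[i] += yt[i];
    return y;
}
```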
Hope this helps.