I want to perform some expensive computations on the content of a given audio block, which I know will be too expensive to apply to that same block in real time. Is there a way in which I can apply the result of computing on audio block n to some later block in the future?
For context: I wish to compute a Parabolic Equation model of the effects of natural terrain (hills etc.). This involves multiple matrix multiplications and additions for every tenth of a wavelength over the sound propagation distance (the propagation distance is a parameter the user can set). Naturally this is too much to compute in each processBlock() call of my plugin, so my current plan is to precompute the values so they can be used in the plugin. Any information on the question above would be very much appreciated, all best
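Concretely, the shape I have in mind is roughly the following (a very rough plain-C++ sketch with the JUCE plumbing left out; precomputeTerrainIR() is a made-up stand-in for the real Parabolic Equation solver):

```cpp
// Very rough sketch: split the work into a heavy precompute step and a cheap
// per-block apply step. precomputeTerrainIR() is a hypothetical stand-in for
// the real Parabolic Equation solver.
#include <cmath>
#include <cstdio>
#include <vector>

// Heavy part: run outside the audio callback (e.g. in prepareToPlay(), or when
// the propagation-distance parameter changes), reducing the PE result to a
// short impulse response.
std::vector<float> precomputeTerrainIR (double sampleRate, float propagationDistance)
{
    (void) sampleRate;                                   // would set the frequency grid
    std::vector<float> ir (64, 0.0f);
    ir[0] = std::exp (-0.0005f * propagationDistance);   // placeholder, not real physics
    return ir;
}

// Cheap part: run every block in processBlock(). Naive FIR convolution here;
// a real plugin would more likely hand the precomputed IR to juce::dsp::Convolution.
void applyIR (const float* in, float* out, int numSamples, const std::vector<float>& ir)
{
    for (int n = 0; n < numSamples; ++n)
    {
        float acc = 0.0f;
        for (int k = 0; k < (int) ir.size() && k <= n; ++k)
            acc += ir[(size_t) k] * in[n - k];
        out[n] = acc;   // note: this sketch ignores filter state across block boundaries
    }
}

int main()
{
    auto ir = precomputeTerrainIR (48000.0, 250.0f);                 // once, up front
    std::vector<float> input (512, 1.0f), output (512, 0.0f);
    applyIR (input.data(), output.data(), (int) input.size(), ir);   // every block
    std::printf ("first output sample: %f\n", output[0]);
}
```

The idea being that only the apply step has to fit inside the processBlock() time budget.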
That sounds exciting! But lookahead features can't pull off CPU-heavy tasks: if it already took too long to finish this block, you'll be even further behind by the next block.
You can use a FIFO feeding a separate thread for the heavy computations. That's how heavy FFT operations are usually done. It might introduce latency, but the performance benefits are huge.
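Roughly this shape (a minimal plain-C++ sketch with std::thread and atomics; in a JUCE plugin you would normally reach for juce::AbstractFifo and juce::Thread instead, and the AnalysisWorker name and structure here are just illustrative, not any existing API):

```cpp
#include <atomic>
#include <chrono>
#include <cstdint>
#include <thread>
#include <vector>

struct AnalysisWorker
{
    static constexpr uint32_t fifoSize = 1u << 15;   // power-of-two ring buffer
    std::vector<float> fifo = std::vector<float> (fifoSize, 0.0f);
    std::atomic<uint32_t> writePos { 0 }, readPos { 0 };
    std::atomic<float>    latestResult { 1.0f };     // whatever the analysis produces
    std::atomic<bool>     shouldExit { false };
    std::thread           worker;

    void start() { worker = std::thread ([this] { run(); }); }
    void stop()  { shouldExit = true; worker.join(); }

    // Audio thread (processBlock): cheap copy into the ring buffer, no locks,
    // no allocation. Overflow handling is omitted in this sketch.
    void pushSamples (const float* data, int num)
    {
        auto w = writePos.load (std::memory_order_relaxed);
        for (int i = 0; i < num; ++i)
            fifo[(w + (uint32_t) i) & (fifoSize - 1)] = data[i];
        writePos.store (w + (uint32_t) num, std::memory_order_release);
    }

    // Audio thread: pick up whatever the worker finished most recently.
    float getLatestResult() const { return latestResult.load (std::memory_order_acquire); }

private:
    void run()
    {
        std::vector<float> block (4096);
        while (! shouldExit.load())
        {
            auto available = writePos.load (std::memory_order_acquire)
                           - readPos.load (std::memory_order_relaxed);
            if (available >= block.size())
            {
                auto r = readPos.load (std::memory_order_relaxed);
                for (uint32_t i = 0; i < block.size(); ++i)
                    block[i] = fifo[(r + i) & (fifoSize - 1)];
                readPos.store (r + (uint32_t) block.size(), std::memory_order_release);

                // ... the expensive Parabolic Equation work on 'block' goes here ...
                latestResult.store (block[0], std::memory_order_release); // placeholder
            }
            else
            {
                std::this_thread::sleep_for (std::chrono::milliseconds (1));
            }
        }
    }
};

int main()
{
    AnalysisWorker analysis;
    analysis.start();

    std::vector<float> fakeBlock (512, 0.25f);
    for (int i = 0; i < 100; ++i)                 // pretend these are processBlock() calls
        analysis.pushSamples (fakeBlock.data(), (int) fakeBlock.size());

    std::this_thread::sleep_for (std::chrono::milliseconds (20));
    analysis.stop();
}
```

The audio thread only copies samples in and reads back the last finished result, so it never waits on the heavy work; the price is that the results always describe audio from a little while ago.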
Is the FIFO solution you have in mind similar to the example code for drawing the frequency spectrum (minus the actual drawing part in this case, obviously), or is it a different solution? Surely the time it takes to fill a FIFO, compute the forward transform and then do my computations is still subject to the same overall time constraint, in the sense that the audio needs to be output before the next incoming block. Currently I declare a new AudioBuffer of 4 * the numSamples of the buffer in prepareToPlay.
How does using a FIFO add extra benefit to this? Many thanks for your response.
P.S. I know that spinning up multiple threads on the audio thread is a no-no, so how could using a FIFO on the same thread offer benefits, if the next audio buffer is still arriving at the process block at a given time in the future?
Apologies if I am misunderstanding something here, please elaborate.
Thank you for your response 
I have a vast set of parameters, so precomputing all options would result in many huge hashMaps or similar structures (I have not yet considered the best data type for this), but theoretically I could compute the outcome for every frequency in the human audible range, for every parameter, at every slider value increment…
I think I may need to make some simplifying assumptions or set some limits here!
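For example, one way I could set those limits is to precompute on a coarse grid (say, third-octave bands times a few dozen propagation-distance steps) and interpolate between grid points at run time, rather than storing a value for every slider increment. A rough sketch of that idea, with dummy table contents standing in for the real Parabolic Equation results:

```cpp
// Rough sketch: precompute on a coarse (distance x frequency-band) grid and
// interpolate at run time. Dummy contents here; the real entries would come
// from the Parabolic Equation solver, offline or in prepareToPlay().
#include <algorithm>
#include <vector>

struct TerrainTable
{
    int   numDistances = 32;      // e.g. 32 distance steps instead of every slider value
    int   numBands     = 31;      // e.g. third-octave bands instead of every frequency
    float maxDistance  = 1000.0f; // metres, say
    std::vector<float> gains = std::vector<float> ((size_t) numDistances * numBands, 1.0f);

    // Linear interpolation along the distance axis for one frequency band.
    float lookup (float distance, int band) const
    {
        float pos  = std::clamp (distance / maxDistance, 0.0f, 1.0f) * (float) (numDistances - 1);
        int   i0   = (int) pos;
        int   i1   = std::min (i0 + 1, numDistances - 1);
        float frac = pos - (float) i0;
        float g0   = gains[(size_t) i0 * numBands + (size_t) band];
        float g1   = gains[(size_t) i1 * numBands + (size_t) band];
        return g0 + frac * (g1 - g0);
    }
};

int main()
{
    TerrainTable table;                    // filled by the real solver in practice
    float g = table.lookup (420.0f, 12);   // e.g. when the distance slider moves
    (void) g;
}
```

That keeps the storage down to (distance steps × bands) entries per parameter combination instead of one entry per slider tick.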