Wish me luck articulating this…
There might be a reason to avoid a plugin’s normal real-time processing - maybe it uses a lot of CPU, or maybe it has high latency. If there were a way to avoid that processing overhead while still getting the benefit of the processed audio, it seems like it would be useful in some situations. In most cases, a plugin’s settings do not change between playbacks, so if the plugin could simply “cache” its processed output and replay those samples until some UI setting is modified (which would invalidate the cache), that seems like it would be very useful for these plugins. (Side note: if the plugin involves any random number generation, it cannot do this without losing the randomness, so exclude those plugins from this discussion.)
In order for this to work, the plugin needs to know exactly which samples are arriving at processBlock. Instead of “here are 512 samples”, it would need “here are samples 1000 through 1511”, measured from the start of the host session timeline or some other reference point. If those processed samples already exist in its locally stored memory cache, it simply returns them; otherwise it does its processing and adds the result to the cache. It is sort of an “auto-rendering” functionality. It seems very useful, but I am wondering why it does not already exist (or maybe it does). If it works, it seems like it could one day be taken care of in the JUCE layer, for the benefit of all.
If I understand CurrentPositionInfo correctly, timeInSamples is an absolute sample position measured from the beginning of the host audio session. Can this be used inside processBlock to accomplish what I am talking about?