Adapting a step-by-step audio processing algorithm into an audio plugin

Hi there, I am new to JUCE.
I have an audio processing algorithm that makes several passes over the audio file. The first pass accumulates and calculates parameters for further processing; the second pass does the actual processing of the audio file.
I want to use this algorithm in an audio plugin (VST/AU, etc.), but I have several obstacles:

  1. Before the first pass I need to know the total number of samples in the input audio stream, because I have to prepare “duration” parameter slots, where “int duration = kProcessParameterShift - 1 + (int)std::ceil((double)numFrames / kProcessParameterOffset);” and “numFrames” is the total number of frames in the audio file (input audio stream). I could use text fields in the plugin UI where the user sets the processing range. So, how can I get all the frames of the input track (does JUCE have a mechanism to get the total number of frames from the input track)?
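For illustration, the quoted formula can be written as a small standalone C++ function. The constant values here are placeholders I made up; the real values come from the algorithm:

```cpp
#include <cmath>

// Hypothetical constants; the real values come from the algorithm.
constexpr int kProcessParameterShift  = 4;
constexpr int kProcessParameterOffset = 512;

// Number of parameter slots needed for an input of numFrames samples,
// per the formula quoted above.
int parameterDuration (long long numFrames)
{
    return kProcessParameterShift - 1
         + (int) std::ceil ((double) numFrames / (double) kProcessParameterOffset);
}
```

The point is that “duration” depends on “numFrames”, which a real-time plugin does not know up front — hence the question.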

  2. How can I do several passes? In JUCE, “processBlock” is called only per block. I tried offline processing by calling “setNonRealTime(true)” in “prepareToPlay”, but it had no effect — I saw no difference in the plugin’s audio processing. Maybe I didn’t create the offline plugin correctly, or maybe I should use a toggle in the plugin UI with “first step” and “second step”, or buffer all samples during the first pass? So, does JUCE have a mechanism for several passes over the input track?

Thank you in advance for your advice.

Sounds like a difficult process to implement in an audio plugin. You would need to do two passes and have a switch in the plugin, I think.

Maybe it’d work well in an Avid AudioSuite plugin or similar… or obviously you could do it as a standalone executable, and then you can process the audio however you like.

JUCE does not support the plugin extensions, like ARA (for VST2/3) or offline AudioSuite for Pro Tools, that would be required to do this directly. You can attempt doing a plugin that records the input audio to disk or a memory buffer and you could do your multipass processing on that recording. But it’s going to be fiddly for users. That’s why things like ARA were developed.
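To make the record-then-process idea concrete, here is a minimal sketch in plain C++ (no JUCE types; the class name and the peak/normalise passes are just illustrative stand-ins for the actual two-pass algorithm):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical recorder: accumulates real-time blocks into memory,
// then runs two offline passes once recording stops.
class MultiPassRecorder
{
public:
    // Called from the real-time path for each incoming block.
    void appendBlock (const float* samples, std::size_t numSamples)
    {
        recording.insert (recording.end(), samples, samples + numSamples);
    }

    // Pass 1: analyse the whole recording (here: find the peak).
    float analyse() const
    {
        float peak = 0.0f;
        for (float s : recording)
            peak = std::max (peak, std::abs (s));
        return peak;
    }

    // Pass 2: process using the parameter from pass 1 (here: normalise).
    void process (float peak)
    {
        if (peak > 0.0f)
            for (float& s : recording)
                s /= peak;
    }

    const std::vector<float>& data() const { return recording; }

private:
    std::vector<float> recording;
};
```

In a real plugin the appends would happen in “processBlock” and the two passes would run only after the user stops the recording; growing a std::vector on the audio thread is not real-time safe, so a preallocated buffer would be needed in practice.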

AudioProcessor::setNonRealTime is not a method you call yourself. The host may or may not call it when it is rendering offline. It doesn’t really have anything to do with what you are trying to do.


Thanks for the answer, but I already have an application with this algorithm. I want to remake it as an audio plugin.

“You can attempt doing a plugin that records the input audio to disk or a memory buffer and you could do your multipass processing on that recording.” — this is an interesting idea, but is ARA an obstacle for me?

I was maybe unclear in my reply. I meant that since ARA is not supported with JUCE-based plugins, your only other option is to make your plugin record the input in real time; once the user has stopped the recording, the plugin can do its analysis/processing steps.

Yes, I think so too. When the user stops the recording, I will have the full data for my parameters; after that I can indicate in the plugin UI that the plugin has finished learning and is ready to process. But I still have the problem from item 1.
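The “finished learning, ready to process” indication could be a simple flag shared between the audio thread and the UI. A minimal sketch, assuming a two-state design (names are hypothetical):

```cpp
#include <atomic>

// Hypothetical state shared between the audio thread (which finishes the
// learning pass) and the UI thread (which shows "ready to process").
enum class PluginState { Learning, Ready };

struct StateFlag
{
    std::atomic<PluginState> state { PluginState::Learning };

    // Called once the first (analysis) pass has all the data it needs.
    void finishLearning() { state.store (PluginState::Ready); }

    // Polled by the UI timer to update the indicator.
    bool isReadyForProcessing() const { return state.load() == PluginState::Ready; }
};
```

Using std::atomic avoids locking on the audio thread while still letting the UI poll the state safely.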

About the problem in item 1: I think I should use the following text fields in the plugin UI: “start processing time” and “end processing time”. In processBlock I should analyse the result of “getPlayHead()->getCurrentPosition(info)”, where “info” is an AudioPlayHead::CurrentPositionInfo. If “info.timeInSeconds” >= the “end processing time” from the GUI, all further samples will be skipped.
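The range check itself is trivial and can be kept separate from any JUCE types. A sketch (the function name and half-open range convention are my own choices):

```cpp
// Hypothetical helper: decide whether the playhead's current time
// (e.g. info.timeInSeconds) falls inside the user-selected range.
// Uses a half-open interval [startSeconds, endSeconds).
bool insideProcessingRange (double timeInSeconds,
                            double startSeconds,
                            double endSeconds)
{
    return timeInSeconds >= startSeconds && timeInSeconds < endSeconds;
}
```

processBlock would then process the block when this returns true and pass the audio through (or skip it) otherwise. Note that the playhead time refers to the start of the block, so blocks straddling the range boundary are handled only approximately by this check.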