Audio Processor explanation

Hello everyone,
I have some questions regarding the AudioProcessor class and some of its methods.
When is the releaseResources method called, or the reset method? Their descriptions say when they are supposed to be called, but I tried to verify if and when they actually get called: I made a public bool attribute in the processor class initialised to false, set it to true in releaseResources (or reset), and added a timer callback in the editor class that checks every 40 ms whether that bool is true and, if so, turns a toggle button on to let me know. But that button never gets activated.
Am I missing something here?
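One thing worth checking regardless: a plain bool written on one thread and read on another is not guaranteed to be visible to the reader. A minimal sketch of the same flag done with std::atomic instead (all names here are illustrative, not JUCE API):

```cpp
#include <atomic>
#include <cassert>

// Hypothetical processor-side flag. A plain bool has no cross-thread
// visibility guarantees; std::atomic<bool> does.
struct ProcessorFlags
{
    std::atomic<bool> releaseResourcesWasCalled { false };

    // This would be the body of the overridden releaseResources() method.
    void releaseResources()
    {
        releaseResourcesWasCalled.store (true, std::memory_order_release);
    }
};

// The editor-side timer callback would poll the flag like this:
bool timerCheck (const ProcessorFlags& p)
{
    return p.releaseResourcesWasCalled.load (std::memory_order_acquire);
}
```

That said, if the flag never flips even with an atomic, the host simply isn't calling the method.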

Well, if they’re not being called, that’s because the host hasn’t chosen to call them, presumably because it’s still playing your plugin.

And a host may never call reset() at all. I’m sure many of them don’t.

Okay, thanks, that explained some things to me.
What about the processBlock method? When I set my channel config to {2,2} everything works well, but when I set it to {0,2} (which is what I want, since I don’t want any audio input, just to generate audio output from MIDI messages), there is no audio playback. Why would that occur? I don’t have any for loops that use the number of inputs as a condition.

Edit: I have figured out a solution for getting audio playback with the {0,2} config. Still wondering: when is processBlock called?
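For anyone hitting the same silence with a no-input layout: a common cause is not clearing the output buffer before rendering, since with zero inputs the buffer arrives with undefined contents. A minimal sketch of the pattern, using plain std::vector stand-ins rather than a real JUCE AudioBuffer (names are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of a synth-style process callback with no audio inputs:
// zero the output channels first, then add generated samples on top.
void processBlock (std::vector<float>& left, std::vector<float>& right)
{
    std::fill (left.begin(),  left.end(),  0.0f);  // like buffer.clear() in JUCE
    std::fill (right.begin(), right.end(), 0.0f);
    // ... render MIDI-driven voices additively into left/right here ...
}
```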

The host repeatedly calls it to get new blocks of audio. If the host is working correctly, the first processBlock call happens after one or more calls to prepareToPlay have happened.
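The calling order described above can be sketched as a tiny host-session simulation (purely illustrative, not real host code):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simulate the sequence of callbacks a well-behaved host makes:
// prepareToPlay at least once, then processBlock repeatedly on the
// audio thread, then (maybe) releaseResources when playback stops.
std::vector<std::string> simulateHostSession (int numBlocks)
{
    std::vector<std::string> log;
    log.push_back ("prepareToPlay");        // one or more times, before audio
    for (int i = 0; i < numBlocks; ++i)
        log.push_back ("processBlock");     // repeatedly, for each audio block
    log.push_back ("releaseResources");     // possibly, when the host stops
    return log;
}
```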


Thanks a lot, I get it now. 🙂

> Well, if they’re not being called, that’s because the host hasn’t chosen to call them, presumably because it’s still playing your plugin.
> And a host may never call reset() at all. I’m sure many of them don’t.

… which makes it useless. A method that might be called to indicate something, or might not be called while meaning the same thing, is nuts. A framework should take care of transparent behaviour, or remove the method entirely to make clear that users have to find out for themselves what state the processor is in.

The framework satisfies a protocol used by hosts to communicate with the plugin. You can’t really force the hosts to call every method available. Sure, we all agree it would be nice if all hosts behaved in a predictable way…

It could be worse: when writing an FxPlug plugin for Final Cut Pro X, I learned that they save time by not calling my destructor; they just free the memory and are done… (OT)
That’s fun…

I am not talking about forcing the host into a certain behaviour. This is about making sure the framework transparently behaves correctly, even in odd hosts.
What is the programmatic difficulty in making sure both methods are balanced around the processing?

I think it is called directly from the host. If the wrapper were to remove additional calls, which ones would it drop? And if the wrapper called it because the host omitted it, when would that be?

But that needs verification by somebody who knows the wrapper better than me, that’s out of my wheelhouse… cheers!

The framework would check that prepareToPlay() has been called exactly once before processing. It would further discard any additional prepareToPlay() calls if nothing has changed; if something changed, it would balance them with releaseResources().
If prepareToPlay() wasn’t called beforehand, it would be invoked from the first process call. The same goes for releaseResources() and the destructor.
In the end, that’s the mechanism anybody has to implement who needs something prepared/released depending on the streaming setup (I/O size, sample rate, number of channels, etc.).
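That mechanism could be sketched roughly like this, as a plain C++ guard class rather than actual wrapper code (all names and the fallback defaults are illustrative assumptions, not JUCE internals):

```cpp
#include <cassert>

// Sketch of the "balance it yourself" mechanism described above: lazily
// prepare on the first process call if the host skipped prepareToPlay,
// discard redundant prepares, and release at most once per prepare.
class PreparedGuard
{
public:
    void prepare (double sampleRate, int blockSize)
    {
        if (prepared && sampleRate == sr && blockSize == bs)
            return;                 // nothing changed: discard redundant call

        if (prepared)
            release();              // something changed: balance the old prepare

        sr = sampleRate;
        bs = blockSize;
        prepared = true;
        ++prepareCount;
    }

    void process()
    {
        if (! prepared)             // host never prepared us: last resort
            prepare (44100.0, 512); // assumed defaults, purely illustrative
        // ... actual audio processing would go here ...
    }

    void release()
    {
        if (prepared)               // only release a matching prepare
        {
            prepared = false;
            ++releaseCount;
        }
    }

    ~PreparedGuard() { release(); } // balance in the destructor too

    int prepareCount = 0, releaseCount = 0;

private:
    bool prepared = false;
    double sr = 0.0;
    int bs = 0;
};
```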

…needless to say, the same mechanics should be implemented for processorLayoutsChanged(), numChannelsChanged(), etc., which are currently equally unreliable, since they might be called once, called repeatedly, or not called at all.


What’s suggested by raketa in the post above sounds quite reasonable to me, what does the JUCE team think about it?

This detail wouldn’t work, as prepareToPlay MUST NOT be called from the audio thread, whereas processBlock HAS TO come from the audio thread…

apart from that I don’t have an opinion…


For example with the releaseResources call, how do you suppose JUCE would know when the host has “stopped playing audio”? By measuring how much “silence” has been passed into the plugin, or measuring real time since the last call to processBlock or some other guess like that?

How would JUCE know when to call the AudioProcessor destructor if the host just doesn’t properly close the plugin via the plugin API?

It is ugly and should be used only as a very last resort, i.e. in the case where prepareToPlay() hasn’t been called at all before processing, but… one could maybe use MessageManager::callFunctionOnMessageThread() to invoke it?

I think the destructor would be the right place where to check if releaseResources() was called after the last processBlock() and, if it hasn’t, call it from there.

Well ok, that plainly disregards any good practice and moral conduct; a framework couldn’t do much about that.

…in that case the audio thread already missed the train…

You would have to add a workaround so that processBlock isn’t called until that method on the message thread has finished, which means the plugin cannot be sure to get a continuous signal across processBlock calls…

If a host is not able to let its plugins set up properly, then there is no rescue, IMHO…


Well, if the host did not call prepareToPlay() there is something fishy anyway, so the options are: run the plugin misconfigured until it manages to configure itself correctly, or invoke prepareToPlay() from the first process call. What’s the worst case for each option: noise or a dropout (avoiding a crash, of course)?

But MessageManager::callFunctionOnMessageThread() already blocks until the call on the message thread has returned (or at least the doc says so), so the first processBlock() will be called only after the prepareToPlay() on the message thread has finished and returned.

I believe that would yield exactly the expected behavior from the plug-in point of view: the plug-in won’t ever know that its first processBlock() was “so close” to being called, but that it has been “put on hold” to make room for the prepareToPlay() on the message thread.

From its standpoint, the plug-in will see exactly the same sequence of events that would happen with a host performing the correct procedure.
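The blocking behaviour being discussed can be illustrated in plain C++, without JUCE, using std::packaged_task: one thread hands work to another and waits for it to return, the way MessageManager::callFunctionOnMessageThread() is documented to block (the function below is an illustrative stand-in, not the JUCE API):

```cpp
#include <cassert>
#include <future>
#include <thread>

// Run fn(arg) on another thread and block the caller until it returns,
// mimicking the documented blocking semantics discussed above.
int callOnOtherThreadAndWait (int (*fn) (int), int arg)
{
    std::packaged_task<int (int)> task (fn);
    std::future<int> result = task.get_future();
    std::thread worker (std::move (task), arg);
    int value = result.get();   // blocks until fn has returned
    worker.join();
    return value;
}
```

The caller (standing in for the audio thread) cannot proceed until the deferred call completes, which is exactly why this is contentious on a real-time thread.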

The whole point of having a dedicated audio thread is not to block. If there is no other option, either by invoking prepareToPlay() directly or by waiting on another thread, the result is the same for the processing thread. The only difference would be if the processing did not wait (but silenced its output) until prepareToPlay() terminated on the deferred thread.

This creates a dropout, while the other solution could produce the wanted result, provided the initialization does not take too long. In an offline situation it would be better anyway to call prepareToPlay() directly.

For OS X Audio Units, isn’t there an article or programming guide provided by Apple that explicitly states what must be initialized to process audio without blocking the audio thread? Surely there is the same thing for Steinberg’s VST SDK, no? AAX probably has the same kind of guide, no?

Here’s Apple’s guide for AUv3:

It doesn’t say anything about which thread it should run on, but this is the specific method to call for AUv3 “prepare to play”: