Hi @jonathonracz,
Thank you for your suggestion to add more documentation to the AudioProcessor class. I’ll try to come up with something and post it here before pushing it to develop.
I don’t quite understand what you mean by testing your AudioProcessor in a plug-in setting vs. a free-floating instance - there really shouldn’t be any difference between the two settings. Are you talking about the difference between hosting an AudioProcessor vs. implementing one (i.e. deriving from AudioProcessor)? After all, the plug-in wrapper code is just some code which hosts your AudioProcessor.
And indeed, what’s super confusing about the AudioProcessor class (and this does not only apply to the multi-bus stuff) is that there are big, big differences in which methods you should override or are allowed to call when
(1) hosting an AudioProcessor (either via the plug-in wrappers or your own hosting code) vs. when
(2) implementing your own AudioProcessor (this is what most plug-in developers do) vs. when
(3) calling through to the base-class AudioProcessor methods inside your own AudioProcessor.
The differences between these three situations aren’t clearly documented anywhere. This difference in usage is specifically problematic for the multi-bus stuff.
But let me start off with some non-multibus examples so that you understand what I mean.
- When hosting an AudioProcessor (1) you would typically call setParameter to change a parameter. When implementing an AudioProcessor (2) you typically override the same method, but if you want to change a parameter from inside your own AudioProcessor you need to call setParameterNotifyingHost (3).
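To make the three call sites concrete, here is a rough sketch (other required AudioProcessor overrides are omitted, and the parameter index/member names are made up):

```cpp
// (1) Host code changing a parameter on a hosted processor:
void hostSetsGain (juce::AudioProcessor& hosted)
{
    hosted.setParameter (0, 0.5f); // host -> processor
}

class MyProcessor : public juce::AudioProcessor
{
public:
    // (2) Implementation side: override the callback the host invokes
    void setParameter (int index, float newValue) override
    {
        gain = newValue;
    }

    // (3) When *you* change the parameter from inside your processor
    // (e.g. the user moved a slider), go through the notifying variant
    // so the host stays in sync:
    void userMovedGainSlider (float newValue)
    {
        setParameterNotifyingHost (0, newValue);
    }

private:
    float gain = 1.0f;
    // ... other required overrides omitted
};
```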
- When hosting (1) an AudioProcessor, you are discouraged from calling createEditor directly. You should rather call createEditorIfNeeded as it includes certain safeguards etc. However, when you implement an AudioProcessor (2) you need to override createEditor. Additionally, you would never call createEditorIfNeeded or createEditor from inside your own AudioProcessor (3).
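A sketch of the editor rules (MyEditor is a hypothetical AudioProcessorEditor subclass; other required overrides omitted):

```cpp
// (1) Host side: prefer createEditorIfNeeded, which guards against
// creating a second editor for a processor that already has one.
juce::AudioProcessorEditor* hostOpensUI (juce::AudioProcessor& hosted)
{
    return hosted.createEditorIfNeeded();
}

// (2) Implementation side: you only override createEditor;
// you never call it (or createEditorIfNeeded) yourself.
class MyProcessor : public juce::AudioProcessor
{
    juce::AudioProcessorEditor* createEditor() override
    {
        return new MyEditor (*this); // MyEditor: your editor subclass (hypothetical)
    }

    bool hasEditor() const override { return true; }

    // ... other required overrides omitted
};
```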
- When implementing an AudioProcessor (2) you obviously override the processBlock callback and do your processing inside of that callback. But when hosting an AudioProcessor (1) it’s not quite so simple. First you need to be sure to lock the audio callback (AudioProcessor::getCallbackLock), then bundle all bus buffers into a single big buffer and lastly call processBlock/processBlockBypassed depending on whether your AudioProcessor should be bypassed or not. We should probably add a method to AudioProcessor which does all of this for you, maybe call it render or something. This render function would be intended to be called only when you host an AudioProcessor (1). And again, you would never call render yourself from inside your own AudioProcessor (3).
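A rough sketch of what such a hypothetical render helper might look like - note that no such method exists in JUCE today, this just illustrates what host code (1) currently has to do by hand (bundling the per-bus buffers into the single buffer is elided):

```cpp
// Hypothetical host-side helper - "render" is the made-up name from
// the discussion above, not a real JUCE API.
void render (juce::AudioProcessor& hosted,
             juce::AudioBuffer<float>& allBusesBuffer, // all bus channels bundled into one buffer
             juce::MidiBuffer& midi,
             bool shouldBeBypassed)
{
    // Take the callback lock so the processor can't be reconfigured
    // (e.g. its layout changed) in the middle of the callback.
    const juce::ScopedLock sl (hosted.getCallbackLock());

    if (shouldBeBypassed)
        hosted.processBlockBypassed (allBusesBuffer, midi);
    else
        hosted.processBlock (allBusesBuffer, midi);
}
```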
- Many AudioProcessor methods (such as getSampleRate()) can be used safely when hosting an AudioProcessor (1) but also inside your own AudioProcessor (3).
OK, now let me talk about the multi-bus stuff - because that’s where the distinction between (1), (2) and (3) is really important (and confusing). Let me just start with setting and getting bus layouts:
The most important methods to query the current and potential bus layouts are AudioProcessor::getBusesLayout and AudioProcessor::checkBusesLayoutSupported. Similar to the getSampleRate() method, these methods can be called when you are hosting an AudioProcessor (1) or from within your own AudioProcessor (3). A bunch of other methods (getChannelLayoutOfBus, getChannelCountOfBus, getTotalNumInputChannels, getMainBusNumInputChannels, supportedLayoutWithChannels, getMaxSupportedChannels, …) are really just simple convenience wrappers around these two methods and so can be called in the same situations.
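For example, querying is safe both from host code (1) and from inside your own processor (3). A sketch, assuming a processor that has a sidechain input bus at index 1:

```cpp
void printLayoutInfo (juce::AudioProcessor& proc)
{
    auto current = proc.getBusesLayout();               // the layout right now
    auto mainIns = proc.getMainBusNumInputChannels();   // convenience wrapper
    auto sideIns = proc.getChannelCountOfBus (true, 1); // input bus index 1 (assumed sidechain)

    // Would a different layout be accepted?
    auto wanted = current;
    wanted.getChannelSet (true, 0) = juce::AudioChannelSet::stereo();
    bool wouldWork = proc.checkBusesLayoutSupported (wanted);

    juce::ignoreUnused (mainIns, sideIns, wouldWork);
}
```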
To change the layout of an AudioProcessor you should call setBusesLayout, but this method should only be called by the code which is hosting your AudioProcessor (1). You should not override this method nor should you ever call this method from within your own AudioProcessor (3). As above, there are many convenience wrappers (setChannelLayoutOfBus, enableAllBuses, setCurrentLayout, setNumberOfChannels, …) around this method which are only relevant when you are hosting an AudioProcessor (1). The most common multi-bus API mistake I see these days is that someone is calling one of these methods inside their own AudioProcessor.
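A host-side (1) sketch of changing a hosted processor’s layout - calling anything like this from inside your own AudioProcessor would be exactly the mistake described above:

```cpp
bool hostConfiguresStereo (juce::AudioProcessor& hosted)
{
    auto layout = hosted.getBusesLayout();
    layout.getChannelSet (true,  0) = juce::AudioChannelSet::stereo(); // main input
    layout.getChannelSet (false, 0) = juce::AudioChannelSet::stereo(); // main output

    // setBusesLayout returns false if the processor rejects the layout
    return hosted.setBusesLayout (layout);
}
```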
And then there is the callback isBusesLayoutSupported, which should only be overridden in your own AudioProcessor (2). It should never be called when hosting an AudioProcessor (1), nor should it be called from within your own AudioProcessor (3).
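On the implementation side (2) you simply answer yes/no and let JUCE (or the hosting code) call it for you. A sketch, assuming a plug-in that only supports matched mono or stereo (other required overrides omitted):

```cpp
class MyProcessor : public juce::AudioProcessor
{
    bool isBusesLayoutSupported (const BusesLayout& layouts) const override
    {
        // only allow the same channel set on main input and output,
        // and only mono or stereo
        return layouts.getMainInputChannelSet() == layouts.getMainOutputChannelSet()
            && (layouts.getMainInputChannelSet() == juce::AudioChannelSet::mono()
             || layouts.getMainInputChannelSet() == juce::AudioChannelSet::stereo());
    }

    // ... other required overrides omitted
};
```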
To add and remove buses (what we call dynamic layouts), there are similar restrictions. addBus/removeBus are only intended to be used by (1), getBusCount can be used in (1) and (3), and canAddBus/canRemoveBus are for (2). However, as you point out, only AU hosts make use of this feature. In particular, Logic uses this for multi-out synths.
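Dynamic buses in a nutshell, as a sketch (the limit of 16 output buses is an arbitrary assumption; other required overrides omitted):

```cpp
class MultiOutSynth : public juce::AudioProcessor
{
    // (2): the implementation answers whether a bus may be added/removed;
    // note getBusCount is one of the methods that is fine to call here (3)
    bool canAddBus    (bool isInput) const override { return ! isInput && getBusCount (false) < 16; }
    bool canRemoveBus (bool isInput) const override { return ! isInput && getBusCount (false) > 1; }

    // ... other required overrides omitted
};

// (1): an AU host such as Logic would then effectively do
//     hosted.addBus (false); // add one more output bus
```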
When we wrote the API, it would have been great if we could have enforced these restrictions. However, the only way to do this would have been to split the AudioProcessor class into an AudioProcessorCallbacks class (2) with a bunch of virtual methods, and an AudioProcessor class which is marked final (1), and some sort of AudioProcessor::State class (1), (3) which is passed to the AudioProcessorCallbacks class and is used by the AudioProcessor class. The State class would include things like getSampleRate and getBusesLayout. This class basically stores the member variables which are currently in the AudioProcessor class.
However, such a change would have radically broken all our users’ code in a non-trivial way. So that’s why we have the current situation.
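For what it’s worth, a tiny self-contained sketch of that hypothetical split - the class names (AudioProcessorCallbacks, final AudioProcessor, State) come from the description above, but every member and signature here is made up for illustration:

```cpp
namespace sketch
{
    // (1), (3): shared read-only state, usable from host code and from
    // inside the callbacks - what getSampleRate/getBusesLayout would live on.
    class State
    {
    public:
        double getSampleRate() const      { return sampleRate; }
        int getMainBusNumChannels() const { return numChannels; }

        double sampleRate  = 44100.0;
        int    numChannels = 2;
    };

    // (2): the part a plug-in developer derives from - callbacks only,
    // so there is nothing host-only to call by mistake.
    class AudioProcessorCallbacks
    {
    public:
        virtual ~AudioProcessorCallbacks() = default;
        virtual void processBlock (float* buffer, int numSamples, const State&) = 0;
    };

    // (1): the host-facing part - marked final, so the host/implementation
    // split is enforced by the type system.
    class AudioProcessor final
    {
    public:
        explicit AudioProcessor (AudioProcessorCallbacks& cb) : callbacks (cb) {}

        // only host code drives the callbacks, passing the state through
        void render (float* buffer, int numSamples)
        {
            callbacks.processBlock (buffer, numSamples, state);
        }

        State state;

    private:
        AudioProcessorCallbacks& callbacks;
    };
}
```

With this shape, calling a host-only method from inside a callback simply would not compile, because the callback never sees the host-facing class.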