Channel setting change in real time?

Hi, sorry if these days I'm asking a lot of questions. As always I'm working on the DspModulePluginDemo. My question of the day is: can I change the channel setting in real time, for example in the processBlock function? I have the project with:


.withInput ("Input", AudioChannelSet::stereo(), true)

.withOutput ("Output", AudioChannelSet::stereo(), true)),

So I have a stereo configuration, Left/Right. Now if in processBlock I want to use a mono configuration:

auto firstChan = block.getSingleChannelBlock (0);
process (dsp::ProcessContextReplacing<float> (firstChan));

I have only the left channel active. Now with this code I copy the content from channel 0 back into channel 1:

for (size_t chan = 1; chan < block.getNumChannels(); ++chan)
    block.getSingleChannelBlock (chan).copy (firstChan);

and I have a Left/Right configuration again. But what I would need is two centre channels, practically a dual-mono configuration, so I can decide to process a real single centre mono channel, and if needed I would also like the possibility of dual mono, i.e. both channels in the centre rather than Left/Right. So is it possible to change the configuration in real time, only when needed? And if yes, how could I obtain dual mono? Thanks so much as always for your time, and forgive me if what I ask sounds like a novice's question :slight_smile:

Hi @soundzark, those layouts (AudioChannelSet::stereo()) in the constructor are only the default layouts of your plug-in. The host may choose to change the layout to anything else between prepareToPlay callbacks. In fact, most DAWs won’t even look at the default layout that you use in the constructor.

Unfortunately, no plug-in can change its own channel layout - this is always up to the host. The host has complete control over this. The only thing you can do is tell the host which layouts you support (for example, you can tell the host that you only support mono and quadraphonic layouts). You do this by overriding the AudioProcessor::isBusesLayoutSupported callback. On initialisation the host will call this callback several times to probe which channel layouts you support (you return true if you support the layout and false otherwise). The host will then choose one of the supported layouts and then call prepareToPlay. In the prepareToPlay call you can then inspect which layout you actually got by calling AudioProcessor::getChannelLayoutOfBus.
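To illustrate the probing idea, here is a hedged, framework-free sketch of the predicate such a callback might implement. Plain channel counts stand in for JUCE's AudioChannelSet, and `isLayoutSupported` is a hypothetical name; this toy plug-in accepts only mono or quadraphonic layouts (the example from the text) and requires the input layout to match the output layout:

```cpp
#include <cassert>

// Hypothetical stand-in for an isBusesLayoutSupported override.
// The host probes candidate layouts (represented here by channel
// counts) and we answer true or false for each one.
bool isLayoutSupported (int numInputChannels, int numOutputChannels)
{
    // Require a symmetric layout: input must match output.
    if (numInputChannels != numOutputChannels)
        return false;

    // Accept only mono (1 channel) or quadraphonic (4 channels).
    return numInputChannels == 1 || numInputChannels == 4;
}
```

In a real plug-in the callback receives a BusesLayout and you would compare its channel sets against AudioChannelSet::mono(), AudioChannelSet::quadraphonic() and so on, but the decision logic is the same shape.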

For dual mono you probably want to let the host know that you support two mono audio buses by returning true in the AudioProcessor::isBusesLayoutSupported callback when the supplied layout corresponds to two mono audio buses.
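If the host keeps you in a plain stereo layout, you can still produce a dual-mono result inside processBlock by summing the two channels to a centre signal yourself. Below is a hedged, framework-free sketch on raw channel pointers (`makeDualMono` is a hypothetical helper name; in JUCE you would do the equivalent with dsp::AudioBlock). The 0.5 gain keeps the summed level comparable to the inputs:

```cpp
#include <cstddef>

// Hypothetical dual-mono step: sum left and right into one centre
// signal, then write that same signal to both output channels.
void makeDualMono (float* left, float* right, std::size_t numSamples)
{
    for (std::size_t i = 0; i < numSamples; ++i)
    {
        const float centre = 0.5f * (left[i] + right[i]);
        left[i]  = centre;
        right[i] = centre;
    }
}
```

After this step both channels carry identical audio, so the image sits in the centre regardless of the host's stereo routing.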

Edit: removed the word “dynamic” as this was confusing


I was confused about this for a long time as well after the new bus stuff was added, and ended up having to write a bunch of test apps to understand how it worked in a plugin setting vs. a free-floating AudioProcessor instance (where buses can be dynamically added, right?).

It would be nice if the documentation was beefed up to include some simple “what will likely happen in the real world” information like your post. The dynamic bus stuff mentions “…for hosts that support dynamic buses…”, which can make handling lots of corner cases really intimidating when in reality almost no hosts (any hosts?) support this. Something similar to the extensive (but compact) block of useful information for the processBlock method.

Thanks so much for clarifying this, now I'm starting to get a little better understanding day after day of how things work. I tried to set an LCR layout so that I could use Left, Right and Centre, and eventually use only the centre channel for the mono processing, but that still didn't solve my problem: even though in the plugin I got 3 inputs and 3 outputs, my centre output had to go either to the left or to the right, so the sound still comes out of either the left speaker or the right one. I don't know if a DAW would behave differently; I guess I need to test and test again to get a better understanding. But for right now that clarifies a lot for me, so thanks so much, and I hope you won't get upset if over the next days I ask more stupid questions :smiley:

Hi @jonathonracz,

Thank you for your suggestion to add more documentation to the AudioProcessor class. I’ll try to come up with something and post it here before pushing it to develop.

I don’t quite understand what you mean by testing your AudioProcessor in a plug-in setting vs. a free-floating instance - there really shouldn’t be any difference between the two settings. Are you talking about the difference between hosting an AudioProcessor vs. implementing one (i.e. deriving from AudioProcessor)? After all, the plug-in wrapper code is just some code which hosts your AudioProcessor.

And indeed, what’s super confusing about the AudioProcessor class (and this does not only apply to the multi-bus stuff) is that there are big, big differences in which methods you should override or are allowed to call when
(1) hosting an AudioProcessor (either via the plug-in wrappers or your own hosting code)
(2) implementing your own AudioProcessor (this is what most plug-in developers do) vs. when
(3) calling through to the base-class AudioProcessor methods inside your own AudioProcessor
The differences between these three situations aren’t clearly documented anywhere. This difference in usage is especially problematic for the multi-bus stuff.

But let me start off with some non-multibus examples so that you understand what I mean.

  1. When hosting an AudioProcessor (1) you would typically call setParameter to change a parameter. When implementing an AudioProcessor (2) you typically override the same method, but if you want to change a parameter from inside your own AudioProcessor you need to call setParameterNotifyingHost (3).
  2. When hosting (1) an AudioProcessor, you are discouraged from calling createEditor directly. You should rather call createEditorIfNeeded as it includes certain safeguards etc. However, when you implement an AudioProcessor (2) you need to override createEditor. Additionally, you would never call createEditorIfNeeded or createEditor from inside your own AudioProcessor (3).
  3. When implementing an AudioProcessor (2) you obviously override the processBlock callback and do your processing inside of that callback. But when hosting an AudioProcessor (1) it’s not quite so simple. First you need to be sure to lock the audio callback (AudioProcessor::getCallbackLock), then bundle all bus buffers into a single big buffer, and lastly call processBlock/processBlockBypassed depending on whether your AudioProcessor should be bypassed or not. We should probably add a method to AudioProcessor which does all of this for you, maybe call it render or something. This render function would be intended to be called only when you host an AudioProcessor (1). And again, you would never call render yourself from inside your own AudioProcessor (3).
  4. Many AudioProcessor methods (such as getSampleRate()) can be used safely when hosting an AudioProcessor (1) but also inside your own AudioProcessor (3).
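The host-side sequence described in point 3 can be sketched framework-free. In this hedged toy version, a std::mutex stands in for getCallbackLock(), a plain bool stands in for the host's bypass state, and the ToyProcessor/render names are hypothetical, not JUCE API:

```cpp
#include <mutex>
#include <vector>

// Hypothetical stand-in for an AudioProcessor, as seen by a host.
struct ToyProcessor
{
    std::mutex callbackLock;   // stands in for getCallbackLock()
    bool bypassed = false;     // the host decides this, not the processor

    void processBlock (std::vector<float>& buffer)
    {
        for (auto& sample : buffer)
            sample *= 0.5f;    // toy processing: attenuate by 6 dB
    }

    void processBlockBypassed (std::vector<float>&) {}  // pass through
};

// The hypothetical host-side "render" wrapper: lock the audio
// callback, then dispatch to the bypassed or normal process call.
void render (ToyProcessor& proc, std::vector<float>& buffer)
{
    std::lock_guard<std::mutex> lock (proc.callbackLock);

    if (proc.bypassed)
        proc.processBlockBypassed (buffer);
    else
        proc.processBlock (buffer);
}
```

Only the hosting code (1) would call render; the processor itself only ever implements the two callbacks.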

OK, now let me talk about the multi-bus stuff - because that’s where the distinction between (1), (2) and (3) is really important (and confusing). Let me just start with setting and getting buses layouts:

The most important methods for querying the current and potential buses layouts are AudioProcessor::getBusesLayout and AudioProcessor::checkBusesLayoutSupported. Similar to the getSampleRate() method, these methods can be called when you are hosting an AudioProcessor (1) or from within your own AudioProcessor (3). A bunch of other methods (getChannelLayoutOfBus, getChannelCountOfBus, getTotalNumInputChannels, getMainBusNumInputChannels, supportedLayoutWithChannels, getMaxSupportedChannels, …) are really just simple convenience wrappers around these two methods, so they can be called in the same situations.

To change the layout of an AudioProcessor you should call setBusesLayout but this method should only be called by the code which is hosting your AudioProcessor (1). You should not override this method nor should you ever call this method from within your own AudioProcessor (3). As above, there are many convenience wrappers (setChannelLayoutOfBus, enableAllBuses, setCurrentLayout, setNumberOfChannels, …) around this method which are only relevant when you are hosting an AudioProcessor. The most common multi-bus API mistake I see these days is that someone is calling one of these methods inside their own AudioProcessor.

And then there is the callback isBusesLayoutSupported which should only be overridden in your own AudioProcessor (2). It should never be called when hosting an AudioProcessor (1) nor should it be called within your own AudioProcessor (3).

To add and remove buses (what we call dynamic layouts), there are similar restrictions. addBus/removeBus are only intended to be used by (1), getBusCount can be used in (1) and (3), and canAddBus/canRemoveBus are for (2). However, as you point out, only AU hosts make use of this feature. In particular, Logic uses this for multi-out synths.

When we wrote the API, it would have been great if we could have enforced these restrictions. However, the only way to do this is to split the AudioProcessor class into an AudioProcessorCallbacks class (2) with a bunch of virtual methods and an AudioProcessor class which is marked final (1) and some sort of AudioProcessor::State class (1), (3) which is passed to the AudioProcessorCallbacks class and is used by the AudioProcessor class. The State class would include things like getSampleRate and getBusesLayout. This class basically stores the member variables which are currently in the AudioProcessor class.

However, such a change would have radically broken all our users’ code in a non-trivial way. So that’s why we have the current situation.


Sorry, I phrased that in a weird way - what I meant was understanding what’s expected/allowed to be going on when designing an AudioProcessor subclass for being used in a VST/AU/AAX/etc. plugin vs. designing one where you’re also hosting it and may have extra control over things like changing bus layouts from within the processor (which I now understand you’re not supposed to do anyway). Hopefully that makes more sense.

The information in your post is really handy. A similar summary in the documentation, with brief descriptions mentioning “the host will be calling this, don’t call it yourself” or "this is safe to be called within <some internal method(s)> as long as they’re on ", would be really helpful.