In, out, Left, Right processing on an 'audio plugin' VST

A very newbie question here…
I went over the examples and tutorials and still have a basic issue which I can't understand.
Maybe I can get it through an example, with some of your help, please…

I want to build a very simple VST audio plugin.
It should simply multiply the incoming Left channel audio by 0.25 and output it to LeftOutput, and multiply the incoming Right channel audio by 0.5 and output it to RightOutput.
Two channels in, two channels out. No need for sliders or controls.

So, in the Projucer I start an ‘audio plugin’ project and via the IDE (Visual Studio 2017 on Windows) I finally get this on the PluginProcessor.cpp file:

    // This is the place where you'd normally do the guts of your plugin's
    // audio processing...
    // Make sure to reset the state if your inner loop is processing
    // the samples and the outer loop is handling the channels.
    // Alternatively, you can process the samples with the channels
    // interleaved by keeping the same state.
    //for (int channel = 0; channel < totalNumInputChannels; ++channel)
    //{
        auto* channelData = buffer.getWritePointer (0);
        // ..do something to the data...
    //}

My question is: how do I change this code to get the requested output?
How do I handle Left and Right, in and out, in this code?

In that particular case you could just get rid of the for loop and do the following:

buffer.applyGain(0, 0, buffer.getNumSamples(), 0.25f);
buffer.applyGain(1, 0, buffer.getNumSamples(), 0.5f);

But if you want to do it manually, something like this:

std::array<float, 2> chanGains { 0.25f, 0.5f };
for (int channel = 0; channel < totalNumInputChannels; ++channel)
{
    auto* channelData = buffer.getWritePointer (channel);
    for (int i = 0; i < buffer.getNumSamples(); ++i)
        channelData[i] *= chanGains[channel];
}

That does not do error handling if there are more than 2 channels used by the plugin, though.

Thanks. Superb.

Am I right in saying that the first version (buffer.applyGain) is more efficient than the manual one?

And if I add this code (before the gain changes), all the other output channels also get zeroed, correct? Plus denormal protection?

ScopedNoDenormals noDenormals;
auto totalNumInputChannels  = getTotalNumInputChannels();
auto totalNumOutputChannels = getTotalNumOutputChannels();

// In case we have more outputs than inputs, this code clears any output
// channels that didn't contain input data, (because these aren't
// guaranteed to be empty - they may contain garbage).
// This is here to avoid people getting screaming feedback
// when they first compile a plugin, but obviously you don't need to keep
// this code if your algorithm always overwrites all the output channels.
for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
    buffer.clear (i, 0, buffer.getNumSamples());

Yeah, you should do the no-denormals thing and zero the possibly unused channels. There probably isn't a noticeable difference in performance between using the applyGain method of AudioBuffer and applying the gain with your own loop, but you could always benchmark to see if there's a difference.
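If you do want to measure it, here is a rough standalone sketch (plain C++ without JUCE; `applyGainLoop` and `benchmarkGain` are made-up names, and the block size and iteration counts are arbitrary) of how you might time the manual loop:

```cpp
#include <chrono>
#include <vector>

// The manual per-sample gain loop from above, as a standalone function.
static void applyGainLoop (float* data, int numSamples, float gain)
{
    for (int i = 0; i < numSamples; ++i)
        data[i] *= gain;
}

// Time `iterations` runs of the loop over one block; returns microseconds.
static long long benchmarkGain (int blockSize, int iterations)
{
    std::vector<float> block ((size_t) blockSize, 1.0f);

    auto start = std::chrono::steady_clock::now();
    for (int n = 0; n < iterations; ++n)
        applyGainLoop (block.data(), blockSize, 0.9999f);
    auto stop = std::chrono::steady_clock::now();

    return std::chrono::duration_cast<std::chrono::microseconds> (stop - start).count();
}
```

You would time an applyGain-based version the same way and compare; just beware that the optimiser may eliminate work whose result is never read.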


So, for instance, if I want to sum to mono:
Lout = (Rin + Lin) * 0.5;
Rout = Lout;

Is this the correct way of doing it (after zeroing output of other channels)?

channelData[0] = (channelData[1] + channelData[1]) * 0.5;
channelData[1] = channelData[0];

No, that's not going to work. The channelData pointer you get from getWritePointer points to a single channel's audio data.

To sum to mono, something like this can be done:

    float** channelDatas = buffer.getArrayOfWritePointers();
    for (int i = 0; i < buffer.getNumSamples(); ++i)
    {
        float sample = 0.5f * (channelDatas[0][i] + channelDatas[1][i]);
        channelDatas[0][i] = sample;
        channelDatas[1][i] = sample;
    }

Thanks. This works, though I do not understand how…

Is channelDatas the same pointer for both the READ and the WRITE buffer?
Is there any link or explanation to this topic? It seems to be critical to understand :wink:

It is the same pointer. The concept is called “in place processing”, since most of the time you don’t need the data afterwards. If your algorithm needs to keep the data, you have to allocate space for that in the prepareToPlay() method.

An IMHO more elegant way to write your code is:

const auto numSamples = buffer.getNumSamples();
buffer.addFrom (0, 0, buffer, 1, 0, numSamples);
buffer.applyGain (0, 0, numSamples, 0.5f);
buffer.copyFrom (1, 0, buffer, 0, 0, numSamples);

These methods have implementations that use SIMD instructions. However, if the code in your loop is trivial, the optimiser might produce the same machine code anyway. But I prefer not to spell out the loops if I can avoid it.
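As a sanity check that the three bulk calls really compute the same mono sum as the explicit loop, here is a standalone sketch (plain C++, no JUCE; `addFrom`, `applyGain` and `copyFrom` here are simplified stand-ins for the AudioBuffer methods, operating on raw channel arrays):

```cpp
// Simplified stand-ins for AudioBuffer's bulk operations.
static void addFrom (float* dst, const float* src, int n)
{
    for (int i = 0; i < n; ++i)
        dst[i] += src[i];
}

static void applyGain (float* data, int n, float gain)
{
    for (int i = 0; i < n; ++i)
        data[i] *= gain;
}

static void copyFrom (float* dst, const float* src, int n)
{
    for (int i = 0; i < n; ++i)
        dst[i] = src[i];
}

// Sum left and right to mono using the three bulk operations.
static void sumToMono (float* left, float* right, int n)
{
    addFrom   (left, right, n);   // left += right
    applyGain (left, n, 0.5f);    // left = 0.5 * (left + right)
    copyFrom  (right, left, n);   // right = left
}
```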

Thanks, your answer is much appreciated.

So, what is the relation between this 'buffer' and the audio buffer set up in the audio interface?
Say I change the audio interface's buffer from 512 to 1024 samples. How would that affect the buffer in my VST running in the local DAW?

Some DAW applications will use the buffer size that your audio hardware is set to. Some won’t. Some will change the buffer size between calls to processBlock. So you should not really make any assumptions about the buffer size you are asked to process.
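In practice that means always reading the block size from the buffer you are handed, and keeping any per-sample state in your processor rather than tied to a block size. A standalone sketch (plain C++, no JUCE; `GainSmoother` is a made-up one-pole smoother, and 0.01f is an arbitrary coefficient) whose output is identical whether the host sends one 1024-sample block or two 512-sample blocks:

```cpp
// A one-pole smoother whose state carries across processBlock calls, so it
// behaves the same no matter how the host slices the audio into blocks.
struct GainSmoother
{
    float state = 0.0f;   // persists between blocks

    void process (float* data, int numSamples, float target)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            state += 0.01f * (target - state);   // smooth towards the target
            data[i] *= state;
        }
    }
};
```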


Hey, I have an issue that is related in concept to this issue.

I'm trying to get IIR filters to only process the left or the right channel of a stereo buffer. I'm trying to do a dual-mono EQ. Could I use a structure written above to achieve this? Further, could I modify it so that the program processes whole blocks of left and right, rather than individual samples?
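The per-channel structure above does extend to this: keep one filter instance per channel and feed each one that channel's write pointer for the whole block, so the two filter states never mix. A minimal standalone sketch (plain C++; `OnePole` is a hypothetical one-pole low-pass standing in for a real IIR filter, and `channelDatas` is what something like getArrayOfWritePointers() would give you):

```cpp
// Hypothetical one-pole low-pass standing in for a real IIR filter.
struct OnePole
{
    float a = 0.2f;   // smoothing coefficient (assumed value)
    float z = 0.0f;   // per-instance filter state

    // Filters a whole block of one channel in place.
    void processBlock (float* data, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            z += a * (data[i] - z);
            data[i] = z;
        }
    }
};

// Dual mono: an independent filter (and state) for each channel.
struct DualMonoEQ
{
    OnePole left, right;

    void processBlock (float** channelDatas, int numSamples)
    {
        left.processBlock  (channelDatas[0], numSamples);   // left only
        right.processBlock (channelDatas[1], numSamples);   // right only
    }
};
```

Because each filter receives the whole channel block, this already processes blocks of left and right rather than individual samples.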

Thanks for your help!