Going from the VST SDK to JUCE

Hello.

I am new to JUCE; I picked it up because I like the GUI classes. But I can’t for the life of me figure out how to use your AudioSampleBuffer class. I’ve been trying to implement a simple delay just to get started and have failed miserably. How do I take the code from my VST SDK simple delay and implement it with the AudioSampleBuffer class in the processBlock method?

[code]//Delay (single channel for simplicity)

void MyDelay::processReplacing (float** inputs, float** outputs, VstInt32 sampleFrames)
{
    float* in1  = inputs[0];
    float* out1 = outputs[0];

    float delayedOutput;

    while (--sampleFrames >= 0)
    {
        // channel 1: read the delayed sample, then write input + feedback into the delay line
        delayedOutput = bufferLeft[index];
        bufferLeft[index] = (*in1) + (delayedOutput * feedback);
        (*out1) = ((1 - blend) * (*in1)) + (delayedOutput * blend);

        ++index;
        ++out1;
        ++in1;

        // wrap the circular-buffer index
        if (index >= delaySamples)
            index = 0;
    }
}[/code]

index is the write position in my circular buffer, and bufferLeft is the circular buffer itself.
blend and feedback are floats ranging from 0 to 1.
delaySamples is an int holding the number of samples I’m delaying by.
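For reference, here is roughly how those members might be declared and initialised once the plugin is ported to JUCE. This is just a minimal sketch: the 500 ms delay time is an arbitrary example, and in a JUCE AudioProcessor the allocation would typically go in prepareToPlay():

[code]#include <vector>

// Sketch of member declarations
std::vector<float> bufferLeft;   // circular delay buffer
int index = 0;                   // write position in the circular buffer
int delaySamples = 0;            // delay length in samples
float blend = 0.5f;              // dry/wet mix, 0..1
float feedback = 0.5f;           // feedback amount, 0..1

void MyDelay::prepareToPlay (double sampleRate, int /*samplesPerBlock*/)
{
    // e.g. a 500 ms delay: allocate and zero the circular buffer
    delaySamples = (int) (0.5 * sampleRate);
    bufferLeft.assign ((size_t) delaySamples, 0.0f);
    index = 0;
}[/code]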

I’m not sure what you are stuck on.

AudioSampleBuffer::getSampleData()

gets you access to the raw samples, so existing code that just wants to work on arrays of floats can do so just fine.

I got that far, I think. I don’t know how to write to the output stream (or if it’s even set up that way). Is the AudioSampleBuffer that’s passed into the processBlock method both the input and the output?

oh jeez. I just read the help file. I think I got it now. Sorry about that!

Yeah, the input is the output.
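For example, a minimal in-place sketch (MyPlugin and the gain member are just placeholders) that reads and writes the very same buffer:

[code]void MyPlugin::processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
{
    for (int channel = 0; channel < getNumInputChannels(); ++channel)
    {
        float* samples = buffer.getSampleData (channel); // raw read/write access

        for (int i = 0; i < buffer.getNumSamples(); ++i)
            samples[i] *= gain; // the input sample is overwritten with the output
    }
}[/code]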

http://www.rawmaterialsoftware.com/juce/api/classVSTPluginInstance.html#b48afc8fb1960653b5a889b709207bb4

Hi there,

I’m trying to port an old VST plug-in to AU using JUCE and I’m stuck on the same issue: I don’t know how to write to the output stream, and the link above is dead. Could someone please help? My issue is almost identical to the OP’s. Thanks

JUCE uses the same buffer for input and output; one way to use the above code in a JUCE plugin would be:

[code]//Delay (single channel for simplicity)

void MyDelay::processBlock (AudioSampleBuffer& buffer, MidiBuffer& midi)
{
   float* data = buffer.getSampleData (0);
   int sampleFrames = buffer.getNumSamples();
   float delayedOutput;

   while (--sampleFrames >= 0)
   {
      //channel 1
      delayedOutput = bufferLeft[index];

      bufferLeft[index] = (*data) + (delayedOutput * feedback);
      (*data) = ((1 - blend) * (*data)) + (delayedOutput * blend);
      ++index;

      ++data;

      if (index >= delaySamples)
         index = 0;
   }
}[/code]

Although you get (imo) much cleaner code if you use a for loop and indices:

[code]for (int i = 0; i < sampleFrames; ++i)
{
   delayedOutput = bufferLeft[index];
   bufferLeft[index] = data[i] + (delayedOutput * feedback);
   data[i] = ((1 - blend) * data[i]) + (delayedOutput * blend);
   [...]
}[/code]

You might also have a look at the JuceDemoPlugin.

Chris

Thanks Chris, that’s very helpful.

That’s what I thought, and I’ve been using the demo plugin as a reference; I just can’t get my plugin to work. I’ll keep trying, and maybe I’ll post some code later and you might be able to spot the problem.

Cheers

Can anyone see why I’m getting silence with this code? I’m sure it’s something simple that I’m missing.

[code]void ReverseDelayAudioProcessor::processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
{
    const int sampleFrames = buffer.getNumSamples();

    for (int channel = 0; channel < getNumInputChannels(); channel++)
    {
        float* data = buffer.getSampleData (channel);

        for (int i = 0; i < sampleFrames; i++)
        {
            delayBuffer[writePointer] = data[i];
            ++writePointer;
            //Check Bounds for wp
            if (writePointer >= size) { writePointer = 0; }

            //Apply amplitude envelope to the buffer using a cosine wave
            (*data++) = (delayBuffer[readPointer1] * cosine[readPointer1])
                      + (delayBuffer[readPointer2] * (1 - cosine[readPointer2]));

            //Play stored samples in reverse
            --readPointer1;
            //Check Bounds for rp1
            if (readPointer1 < 0)
                readPointer1 = size - 1;

            --readPointer2;
            //Check Bounds for rp2
            if (readPointer2 < 0)
                readPointer2 = size - 1;
        }
    }
}[/code]

First of all, it’s not a good idea to mix indexing (data[i]) with pointer arithmetic (*data++). The first takes the i-th offset from the pointer data, while the second gives you the data at offset 0 and advances the pointer afterwards. If you advance the pointer on every loop iteration, you’re actually reading from data[0], data[2], data[4]…, which may get you an access violation once you read outside your buffer. I don’t know what host you’re using, but the host might decide that your plugin is no good due to the access violation, resulting in the silence.
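In code, either of these is fine on its own; it’s only combining them that skips samples (processedSample here is just a placeholder for the computed output):

[code]// Option 1: indexing only; i is the loop counter
data[i] = processedSample;

// Option 2: pointer arithmetic only
*data = processedSample;
++data;

// Mixing them is the bug: by iteration i the pointer has already been
// advanced i times, so data[i] actually refers to sample 2*i.
[/code]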

Also, your code only works for mono data. With stereo input you will mix the channels, since you fill your delay buffer with alternating channel data.
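One way around that is to keep separate delay state per channel. A rough sketch (delayBuffers, writePointers and MAX_DELAY are hypothetical names, not from the demo plugin):

[code]// Hypothetical members: one delay line and one write position per channel
static const int MAX_DELAY = 44100;   // e.g. one second at 44.1 kHz
float delayBuffers[2][MAX_DELAY];
int   writePointers[2];

// Inside processBlock:
for (int channel = 0; channel < getNumInputChannels(); ++channel)
{
    float* delay = delayBuffers[channel];
    int&   wp    = writePointers[channel]; // per-channel state
    float* data  = buffer.getSampleData (channel);

    for (int i = 0; i < buffer.getNumSamples(); ++i)
    {
        const float delayed = delay[wp];
        delay[wp] = data[i];               // store this channel's dry sample
        data[i]   = delayed;               // output this channel's delayed sample

        if (++wp >= MAX_DELAY)             // wrap this channel's write position
            wp = 0;
    }
}[/code]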

Generally, this is something you can probably spot quite easily with a debugger. Simply set a breakpoint inside the loop and have a look at the data.

Chris

I’m an idiot. Thanks Chris, that’s solved the major problem. I’ve been away from this kind of stuff for a while and it’s taking time to relearn my own algorithms! I hadn’t commented them very well when I first developed this simple process.

Chris, is it this line of code from the JuceDemoPlugin that handles filling the delay buffer with alternating channel data for stereo? I’ve tried to set mine up like this, and it still works fine in mono, but I’m getting strange phasing and distortion when I use it on a stereo track in Logic 9.

I promise this is my last noob question. I realise it’s probably outside the scope of this forum, but I’d really appreciate your insight.

Thanks…

My updated code looks like this:

[code]void ReverseDelayAudioProcessor::processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
{
    const int sampleFrames = buffer.getNumSamples();
    float delOut;

    for (int channel = 0; channel < getNumInputChannels(); channel++)
    {
        float* data = buffer.getSampleData (channel);
        float* delayData = delayBuffer.getSampleData (jmin (channel, delayBuffer.getNumChannels() - 1));

        for (int i = 0; i < sampleFrames; i++)
        {
            delOut = delayData[writePointer];
            delayData[writePointer] = data[i];
            ++writePointer;

            //Check Bounds for wp
            if (writePointer >= size) { writePointer = 0; }

            //Apply amplitude envelope to the buffer using a cosine wave
            delOut = (delayData[readPointer1] * cosine[readPointer1])
                   + (delayData[readPointer2] * (1 - cosine[readPointer2]));
            data[i] = delOut;

            //Play stored samples in reverse
            --readPointer1;
            //Check Bounds for rp1
            if (readPointer1 < 0)
                readPointer1 = size - 1;

            --readPointer2;
            //Check Bounds for rp2
            if (readPointer2 < 0)
                readPointer2 = size - 1;
        }
    }

    // In case we have more outputs than inputs, we'll clear any output
    // channels that didn't contain input data (because these aren't
    // guaranteed to be empty - they may contain garbage).
    for (int i = getNumInputChannels(); i < getNumOutputChannels(); ++i)
        buffer.clear (i, 0, buffer.getNumSamples());
}[/code]

Does delayBuffer have two channels? If not, you’re using the same buffer for both channels due to the jmin (channel, delayBuffer.getNumChannels() - 1).
Also, with stereo your read pointers jump from block to block, because you’re advancing the same variables for both channels; the same applies to writePointer.

For example, you could try this:

[code]const int readPointer1Start = readPointer1;
const int readPointer2Start = readPointer2;
const int writePointerStart = writePointer;

for (channel = 0; channel < getNumInputChannels(); channel++)
{
    float* data = buffer.getSampleData (channel);
    float* delayData = delayBuffer.getSampleData (jmin (channel, delayBuffer.getNumChannels() - 1));

    // Restart the positions for each channel, so every channel steps
    // through the delay line from the same starting point
    readPointer1 = readPointer1Start;
    readPointer2 = readPointer2Start;
    writePointer = writePointerStart;

    for (int i = 0; i < sampleFrames; i++)
    {
        //[...]
    }
}[/code]

Chris