Help for a SamplerVoice-based sample reader VST plugin



I started a drum sample reader VST plugin project for OSX based on JUCE, but I am struggling to find much information (examples, mostly).

My goal for the moment is to have, say, 8 samples, each assigned to one keyboard note and each with its own volume/balance knob (and filters and separate outputs later).

I managed to read samples using the Synthesiser/SamplerVoice objects and to play several samples in parallel, assigning one sample to one MIDI note (one SamplerVoice instance per .wav file).
Now I want to add volume/gain, balance and filter knobs etc. to each SamplerVoice.

(I know how to handle basic GUI elements, like sliders, and pass values to the processor, thanks to the JUCE tutorials.)

#Q1: Is using SamplerVoice the right way to do that (to make a sample-based VST instrument, I mean)?

#Q2, Gain: For the gain, I tried to reuse the code from the JUCE demo plugin, but it did not have any effect on the volume when the sample is played. (I also compiled the audio plugin demo itself and its gain knob has no effect either, as far as I can tell...)

    // Go through the incoming data, and apply our gain to it...
    for (channel = 0; channel < getNumInputChannels(); ++channel)
        buffer.applyGain (channel, 0, buffer.getNumSamples(), gain->getValue());

>Can anyone confirm whether the gain knob works in the audio plugin demo?
>If it works, why would it work for a SynthesiserVoice but not for a SamplerVoice?
>Is there another way to change the volume?
>Is there a way to change the velocity sensitivity of a SamplerVoice?

#Q3, Balance/pan:

I understand from other reading that I would have to address each channel of the buffer independently (channel 0 for L, channel 1 for R) and apply a different gain to each channel to change the balance.

>Can anyone confirm this, or point me to an example of how to handle balance/pan?

#Q4 multiple outputs / routing:

I have read this tutorial (among others):

I understand I will have to deal with AudioProcessorGraph and multiple AudioProcessors if I want to handle multiple outputs (like assigning sample track 2 to output 6, for example).

>Could anyone point me to a simple example of how to use AudioProcessorGraph? (The Scumbler source did not help me much :-( )
>Or does anyone know where I can look regarding routing?

Many thanks =)


I just compiled the audio plugin demo again, from v3.1.1, with the same result: the gain knob does not seem to have any effect.

(Edit, 3h later:) I moved the applyGain AFTER renderNextBlock, and it works!

    synth.renderNextBlock (buffer, midiMessages, 0, numSamples);
    // Go through the incoming data, and apply our gain to it...
    for (channel = 0; channel < getNumInputChannels(); ++channel)
        buffer.applyGain (channel, 0, buffer.getNumSamples(), gain->getValue());

    rms = buffer.getRMSLevel(0, 0,  numSamples);


Yes, this is really a bit confusing. The gain knob changes the gain of the incoming audio data: the audio demo plug-in also takes audio from its inputs and mixes it into the synth audio. The gain refers to the gain of this incoming audio.


Hi Fabian, thanks for your answer.

Your explanation makes sense, but then I don't understand how I can handle the volume of each sound separately with only one processor... (I understand an audio plugin can only have one processor at a time, so it must be possible?)

I currently add two samples to the synth object, one assigned to note 36, the other to note 37, using this code:

    BigInteger notes;
    notes.setRange (note, 1, true);
    SamplerSound::Ptr sound = new SamplerSound (samplePath, *reader, notes, note, 0.0, 0.1, 60.0);
    synth.addSound (sound);

How could I get the buffer generated by each SamplerSound separately in the processor, to apply my processing? (Is it even possible?)

(I understand the buffer is passed to the synth so it can put its audio data in it, right?)

I tried to apply the gain in the processor only when one particular MIDI note is played, but it does not work well: it plays the note unprocessed and, at the same time, a small processed portion of the note...

    MidiMessage m;
    int time;

    for (MidiBuffer::Iterator i (midiMessages); i.getNextEvent (m, time);)
    {
        if (m.isNoteOn())
        {
            info = (String) m.getNoteNumber();

            if (m.getNoteNumber() == 36)
            {
                buffer.applyGain (1, 0, buffer.getNumSamples(), gainR);
                buffer.applyGain (0, 0, buffer.getNumSamples(), gainL);
            }
        }
        // (isNoteOff / isAftertouch / isPitchWheel branches left empty for now)
    }

Would it be a solution, to get the audio data for each SamplerSound, to use:

    synth.renderNextBlock (buffer, midiMessages, 0, numSamples);
    SamplerSound *sound = dynamic_cast<SamplerSound*>(synth.getSound(0));
    AudioSampleBuffer *AudioData = sound->getAudioData();
    AudioData->applyGain (1, 0, buffer.getNumSamples(), gainR);
    AudioData->applyGain (0, 0, buffer.getNumSamples(), gainL);

I tried different combinations, but I get an instant crash as soon as applyGain is called...

Thank you so much for your help =)


What you want to do shouldn't be too hard in general, and using JUCE's Synthesiser class is definitely the right direction. My suggestion would be to derive a child class from JUCE's SamplerVoice and simply override the startNote method, multiplying the velocity by the note's gain. I haven't tested the code below, but it should give you an idea of what I am talking about:

class CustomGainSamplerVoice  : public SamplerVoice
{
public:
    CustomGainSamplerVoice (JuceDemoPluginAudioProcessor& myPluginProcessor)
        : parent (myPluginProcessor)
    {
    }

    void startNote (int midiNoteNumber, float velocity, SynthesiserSound* snd, int pitchWheel) override
    {
        if (midiNoteNumber == 36)
            velocity = velocity * parent.gainOne;
        else if (midiNoteNumber == 37)
            velocity = velocity * parent.gainTwo;

        // call the base class' method
        SamplerVoice::startNote (midiNoteNumber, velocity, snd, pitchWheel);
    }

private:
    JuceDemoPluginAudioProcessor& parent;
};


Deriving SamplerVoice Class

Hi Fabian, thanks for the code snippet, that's clearly something I will need too =)

However, I understand I would still need to get the buffers for each sound separately after calling renderNextBlock() if I want to apply distinct processing, like filters.

Could you confirm it is not possible to do that within the same AudioProcessor in processBlock()?

Assuming it is not possible, here is what I am trying to achieve (taking inspiration from the Scumbler tutorial):

-instantiate x AudioProcessors, with 1 Synthesiser per AudioProcessor
-implement distinct processing in each processBlock()

I understand that:

-I need to declare each AudioProcessor as an AudioProcessorGraph node
-I need to bind the AudioProcessorGraph to an AudioDeviceManager object using an AudioProcessorPlayer object

Is that accurate for a plugin?



You only need one Synthesiser. A Synthesiser can have as many voices as you like; add more with the addVoice() method.

To have unique processing per voice, override renderNextBlock() in the SynthesiserVoice class.


Hi darkoliou bimbol,

I think you are over-complicating things. As Exo mentioned, you only need one Synthesiser. To get more control over a voice than just the gain (filters, for example), you can also override the renderNextBlock method of SamplerVoice, in the same way that I overrode the startNote method in my code snippet above. However, instead of calling the base class at function exit, you would typically call the base class's renderNextBlock first and then apply any filters to the audio afterwards. Use the method getCurrentlyPlayingNote to find out which note is playing. Does this make sense?




Guys, very clear answer, thank you very much =)