Is this possible in a JUCE Synthesiser and if so how?

I would like to essentially do two simple things on a per sample basis:

  • Get the value of a given variable from each of my voices, collected into one place (i.e. probably in the MPESynthesiser object).
  • Do some math on that, and then put the result back into each of my voices.

The problem, as I see it, is that I don’t think there is anywhere in the MPESynthesiser where I can do this kind of per-sample interaction with the voices. renderNextBlock works on an AudioSampleBuffer that is created from the voices, but once it’s a buffer, it’s too late for the kind of back-and-forth per-sample voice manipulation I want.

Is there some other way I can, for example, get a variable out of each voice, sum them all together, and then put the result back into each of the voices on a per-sample basis?


Can you derive your own class from MPESynthesiser which overrides renderNextSubBlock to do sample-by-sample processing? I haven’t tried building the code below, but perhaps something like this could work.

#include <numeric>
#include <vector>

class SampleBySampleVoice : public juce::MPESynthesiserVoice {
public:
  float getSpecialVariable() const { return v_; }
  void setSpecialVariable(float v) { v_ = v; }

  // ...plus overrides of MPESynthesiserVoice's pure virtual members
  // (noteStarted, noteStopped, renderNextBlock, etc.)...

private:
  float v_{};
};

class SampleBySampleSynth : public juce::MPESynthesiser {
public:
  SampleBySampleSynth() {
    // set up voices, keeping track of the real (derived) voice type
    for (auto i = 0; i != 10; ++i) {
      auto v = std::make_unique<SampleBySampleVoice>();
      derivedVoices_.emplace_back(*v);
      addVoice(v.release()); // the synthesiser takes ownership
    }
  }

  void renderNextSubBlock(juce::AudioBuffer<float>& buffer, int start,
                          int num) override {
    render(buffer, start, num);
  }

  void renderNextSubBlock(juce::AudioBuffer<double>& buffer, int start,
                          int num) override {
    render(buffer, start, num);
  }

private:
  template <typename Sample>
  void render(juce::AudioBuffer<Sample>& buffer, int start, int num) {
    const juce::ScopedLock sl(voicesLock);

    for (auto i = 0; i != num; ++i) {
      // collect the special variable from every voice
      const auto newValue = std::accumulate(
          derivedVoices_.begin(), derivedVoices_.end(), 0.0f,
          [](auto acc, auto& v) { return acc + v.get().getSpecialVariable(); });

      // do some maths on the result, then push it back into every voice
      for (auto& v : derivedVoices_)
        v.get().setSpecialVariable(newValue);

      const auto sampleIndex = start + i;

      // render a single sample of each active voice
      for (auto& v : derivedVoices_)
        if (v.get().isActive())
          v.get().renderNextBlock(buffer, sampleIndex, 1);
    }
  }

  std::vector<std::reference_wrapper<SampleBySampleVoice>> derivedVoices_;
};

Is there some other way I can, for example, get a variable out of each voice, sum them all together, and then put the result back into each of the voices on a per-sample basis?

To do per-sample calculations in the Synthesiser (not in the voices), you can either implement your own voice-rendering methods inside the Synthesiser (i.e. not calling the voices per block), or, for your specific requirement, keep a shared variable in the Synthesiser that all the voices can access via a pointer passed to them beforehand, and have them read from it and add their values into it. I don’t know how reliable that last method is, though.

But why would you want to do that instead of doing the per-sample work in the voices only, since anything you can do with that result in the MPESynthesiser will happen on at least a per-block basis?

Oh wow, John, that might work. I didn’t think of that. The question is how JUCE renders the voices: does it do them all synchronously? If they are all rendered simultaneously as each block is rendered, this would work in principle. If each voice is rendered in its own block, one after another, it won’t (i.e. for a given block, if voice 1 renders start to finish, then voice 2 renders start to finish, etc., it won’t work).

I’m still not good at all this pointer/reference business, so can I run it by you and you tell me if it makes sense? In theory, is this how I would do it?

I could create variables inside my PluginProcessor.cpp private section (or does it need to be the public section?) like:

float voice1Var = 0.f;
float voice2Var = 0.f;
float voice3Var = 0.f;
float voice4Var = 0.f;
float voice5Var = 0.f;
float voice6Var = 0.f;

Then where the voices are created in PluginProcessor.cpp, I can pass in a reference to them in addition to my parameters which are already going in to them and the voice number that each voice represents (so I can keep track of which var to manipulate per voice):

for (int i = 0; i < numVoices; i++) {
    mMpeSynth.addVoice(new MPESynthesiserVoiceInherited(
        &parameters, &voice1Var, &voice2Var, &voice3Var,
        &voice4Var, &voice5Var, &voice6Var, i + 1));
}

Then in each voice I can have the following under private:

float* voice1VarPtr;
float* voice2VarPtr;
float* voice3VarPtr;
float* voice4VarPtr;
float* voice5VarPtr;
float* voice6VarPtr;
int voiceNumber;

And my class constructor would be:

class MPESynthesiserVoiceInherited
    : public MPESynthesiserVoice,
      public AudioProcessorValueTreeState::Listener
{
public:
    MPESynthesiserVoiceInherited(AudioProcessorValueTreeState* parameters,
                                 float* voice1Var, float* voice2Var,
                                 float* voice3Var, float* voice4Var,
                                 float* voice5Var, float* voice6Var,
                                 int voiceNumberIn)
    {
        parametersPointer = parameters;
        voice1VarPtr = voice1Var;
        voice2VarPtr = voice2Var;
        voice3VarPtr = voice3Var;
        voice4VarPtr = voice4Var;
        voice5VarPtr = voice5Var;
        voice6VarPtr = voice6Var;
        voiceNumber = voiceNumberIn;
    }

    // ...(the pointer members and voiceNumber live in the private section shown above)...
};

Then, inside each voice, I would be able to assign whatever value I want for that voice to its respective float variable, and all the voices could access the values of all the other voices simultaneously. So I could do the math in each voice.

I just realized after writing this up that it would make more sense to do it with an array of floats rather than individual float variables. But either way the principle would be the same.
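Sketching that array idea outside of JUCE, something like this could work (VoiceTaps, VoiceView, and every other name here are hypothetical stand-ins, not JUCE API):

```cpp
#include <array>
#include <cstddef>
#include <numeric>

// Hypothetical sketch: the processor owns one slot per voice, and each
// voice holds a pointer to the whole array plus its own index.
constexpr std::size_t kNumVoices = 6;

struct VoiceTaps {
    std::array<float, kNumVoices> values{}; // one output sample per voice
};

struct VoiceView {
    VoiceTaps* taps = nullptr; // owned by the processor
    std::size_t index = 0;     // which slot belongs to this voice

    // each voice publishes its own sample...
    void publish(float sample) { taps->values[index] = sample; }

    // ...and can read the sum of every other voice's current sample
    float sumOfOthers() const {
        const float total = std::accumulate(taps->values.begin(),
                                            taps->values.end(), 0.0f);
        return total - taps->values[index];
    }
};
```

The processor would construct one VoiceTaps and hand each voice a pointer to it along with its index, exactly like the per-variable version, just with less repetition.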

Does that make sense and is that what you were suggesting? It makes sense that it should work in principle since that’s how the parameters are getting into the voices.

The only issue is, like I said, that if each voice is rendered as a separate block, then maybe they won’t be synchronized. E.g. if it renders voice 1’s block start to finish, then voice 2’s block start to finish, then voice 3’s block start to finish, then maybe this won’t coordinate. I’m not sure. Any thoughts?

What do you think? Thanks for your help. This would be really nice to work out a solution for that doesn’t require me rebuilding the whole JUCE synthesiser framework.

[I edited this post thinking it was a new answer, and honestly don’t remember what I was saying here]

Thanks John. I’ve looked and looked, previously and again just now, and I still can’t figure out how these voices are summed. From what I can see, I think they are processed individually and asynchronously, but I can’t see where the voices are summed, so I’m not sure.

The renderNextBlock comes from:

And the general method of the MPESynthesiser is explained here:

I have built a guitar modeling synthesiser. One of the things that occurs in a guitar is coupling between the different elements. That means when one element is excited and makes noise, some of that noise will excite other elements and transfer through the instrument. So for example, if you pluck one string, some of the vibration will go through into resonating the body, and some will go across into the other strings.

If each voice is a “string”, I would like to be able to take some output from each and input it into the others on a per sample basis to simulate this effect.

But in order to do that I’d need all my voices to be processed simultaneously sample by sample and I’m not sure if that’s possible. Otherwise it sounds like I have to rewrite some JUCE code. I could always hire some help with that if that’s the only solution as that’s a bit over my head still at this point.

Thanks for any further thoughts. I can always make a second thread to ask about the voice processing to clarify that.
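To make the coupling concrete, here is a JUCE-independent sketch of what one per-sample coupling step could look like. computeBleed, couplingGain, and the equal-weight mixing are all illustrative assumptions; a real guitar model might use a matrix of per-pair coupling coefficients instead:

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Each entry of `outputs` is the current sample of one string. The bleed
// fed into string i is a small fraction of everyone else's output.
std::vector<float> computeBleed(const std::vector<float>& outputs,
                                float couplingGain)
{
    const float total = std::accumulate(outputs.begin(), outputs.end(), 0.0f);

    std::vector<float> bleed(outputs.size());
    for (std::size_t i = 0; i < outputs.size(); ++i)
        bleed[i] = couplingGain * (total - outputs[i]); // everyone but string i
    return bleed;
}
```

Computing the total once and subtracting each string's own contribution keeps the step O(n) per sample instead of O(n²).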

Okay, that looks like a legitimate reason to go that way. In that case, I think the most straightforward method is to make an AudioProcessor and implement a class called String or Voice which has a tick() or getOutput() function that outputs one sample only.
I don’t know about the coupling calculation part: whether you’ll use the output of one voice to do some kind of calculation inside the other voices’ processing, or whether it’s just an addition. If it’s the former, the tricky part will be the loops (when does the loop end, in which one string excites another, which at the same time excites the first one?).
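A minimal, JUCE-independent sketch of that tick()-per-sample structure might look like the following. StringModel is a trivial stand-in (a one-pole decay), not a real physical model, and all names are made up:

```cpp
#include <cstddef>
#include <vector>

// Each String produces one sample per call; the processor drives them all
// in lock-step so every string can see the others' previous samples.
struct StringModel {
    float state = 0.0f;

    // consume the coupling input from the other strings, emit one sample
    float tick(float couplingIn) {
        state = 0.99f * state + couplingIn;
        return state;
    }
};

// One sample of the whole instrument: tick every string, feeding each one
// the previous samples of the others, then mix.
float tickInstrument(std::vector<StringModel>& strings,
                     std::vector<float>& lastOutputs, float couplingGain)
{
    float mix = 0.0f;
    for (std::size_t i = 0; i < strings.size(); ++i) {
        float othersSum = 0.0f;
        for (std::size_t j = 0; j < lastOutputs.size(); ++j)
            if (j != i)
                othersSum += lastOutputs[j];

        mix += strings[i].tick(couplingGain * othersSum);
    }
    // publish this sample's outputs for the next tick
    for (std::size_t i = 0; i < strings.size(); ++i)
        lastOutputs[i] = strings[i].state;
    return mix;
}
```

Note that coupling through the *previous* sample (a one-sample delay in the coupling path) is also one practical answer to the "when does the loop end?" question: the unit delay breaks the instantaneous feedback.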

If you have that part solved, the voice management should be pretty easy: look into JUCE/modules/juce_audio_basics/mpe/juce_MPESynthesiser.cpp, where you’ll find the methods for managing voices. findFreeVoice and findVoiceToSteal are the ones that interest you (but of course you can take all the rest to skip some work).

Well I tried the method I posted above which was:

  • Storing variables in pluginProcessor.cpp which could represent the per sample outputs of each voice.
  • Passing them by reference into the voices, so each voice can access all the other voices’ outputs.

The problem is that the standard MPESynthesiser renders each voice for a block or chunk in sequence. ie. It will render x number of samples for voice 1, then it will render x number of samples for voice 2, etc.

So there is no way for each voice to have access to the output of each other voice on a per sample basis so I can feedback their outputs into each other.

Does this then mean the solution is to rewrite the synthesiser rendering so that it renders each voice only one sample at a time? i.e. one sample of voice 1, then one sample of voice 2, then one sample of voice 3, etc.

If so, how might I go about doing this? Which function would I need to rewrite for an MPESynthesiser? Is it the renderNextSubBlock like reuk suggested above? I don’t really understand that function as it is.

It is utilized in MPESynthesiserBase.cpp as:

void MPESynthesiserBase::renderNextBlock (AudioBuffer<floatType>& outputAudio,
                                          const MidiBuffer& inputMidi,
                                          int startSample,
                                          int numSamples)
{
    // you must set the sample rate before using this!
    jassert (sampleRate != 0);

    MidiBuffer::Iterator midiIterator (inputMidi);
    midiIterator.setNextSamplePosition (startSample);

    bool firstEvent = true;
    int midiEventPos;
    MidiMessage m;

    const ScopedLock sl (noteStateLock);

    while (numSamples > 0)
    {
        if (! midiIterator.getNextEvent (m, midiEventPos))
        {
            renderNextSubBlock (outputAudio, startSample, numSamples);
            return;
        }

        auto samplesToNextMidiMessage = midiEventPos - startSample;

        if (samplesToNextMidiMessage >= numSamples)
        {
            renderNextSubBlock (outputAudio, startSample, numSamples);
            handleMidiEvent (m);
            break;
        }

        if (samplesToNextMidiMessage < ((firstEvent && ! subBlockSubdivisionIsStrict) ? 1 : minimumSubBlockSize))
        {
            handleMidiEvent (m);
            continue;
        }

        firstEvent = false;

        renderNextSubBlock (outputAudio, startSample, samplesToNextMidiMessage);
        handleMidiEvent (m);
        startSample += samplesToNextMidiMessage;
        numSamples  -= samplesToNextMidiMessage;
    }

    while (midiIterator.getNextEvent (m, midiEventPos))
        handleMidiEvent (m);
}

However, the actual function renderNextSubBlock is just:

/** Implement this method to render your audio inside.
    @see renderNextBlock
*/
virtual void renderNextSubBlock (AudioBuffer<float>& outputAudio,
                                 int startSample,
                                 int numSamples) = 0;

What does that mean? Where is the actual default code for renderNextSubBlock so I can see what it’s actually doing normally?

Thanks for any further ideas.

There’s a standard implementation in the MPESynthesiser class. It just loops through all the voices in turn and calls renderNextBlock on them. To get sample-by-sample processing, you’d need to:

  • Add an outer loop to step through each sample in the block.
  • For each sample in the block, call renderNextBlock on each voice with a block size of 1 (essentially just rendering the ‘current’ sample).
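The two orders can be seen with a toy stand-in type; MiniVoice here is hypothetical and only mimics the shape of the renderNextBlock call, recording (voice, sample) pairs in the order they are rendered:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

using Event = std::pair<int, int>; // (voice index, sample index)

struct MiniVoice {
    int id = 0;
    void renderNextBlock(std::vector<Event>& log, int start, int num) {
        for (int i = 0; i < num; ++i)
            log.push_back({id, start + i});
    }
};

// Default MPESynthesiser-style behaviour: each voice renders the whole
// sub-block before the next voice starts.
void renderPerVoice(std::vector<MiniVoice>& voices, std::vector<Event>& log,
                    int start, int num)
{
    for (auto& v : voices)
        v.renderNextBlock(log, start, num);
}

// Sample-by-sample override: outer loop over samples, inner loop over
// voices, each asked for a block of exactly one sample.
void renderPerSample(std::vector<MiniVoice>& voices, std::vector<Event>& log,
                     int start, int num)
{
    for (int i = 0; i < num; ++i)
        for (auto& v : voices)
            v.renderNextBlock(log, start + i, 1);
}
```

With two voices and two samples, the first order renders voice 0's samples 0 and 1 before voice 1 touches anything, while the second interleaves all voices at sample 0, then all voices at sample 1 — which is the property the cross-voice feedback needs.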

That’s pretty much what I’ve done in the code above…

Thanks so much @reuk and @johngalt91!

I got it working. Sounds awesome.

I ended up making a vector in my PluginProcessor which is passed by reference to my voices. Each entry in the vector is the output of one voice.

Then I implemented the sample-by-sample processing solution you suggested, reuk (with modifications to fit my structure), so at each sample the vector is updated, and each voice takes what it needs to calculate the bleed from the other voices into it.

Quick final question, reuk - I’m not used to seeing other people’s code style. You wrote:

    for (auto i = 0; i != num; ++i) {

Is there any reason for doing this? I would normally do:

    for (int i = 0; i < num; ++i) {

I think these give the same outcome, right? Is there any difference ie. in efficiency or anything?

Thanks again.