ADSR behaving oddly on polyphonic notes

I’m pretty new to JUCE and I’m having trouble getting a simple ADSR envelope to work, so any help is much appreciated.

I’m using JUCE v5.4.7 on a Mac in case that makes any difference.

I’m using pretty much the standard MPESynthesiser demo code - and have added a JUCE ADSR to the Voice class. This is set up with an Attack and Decay of 1s, a Sustain level of 0.8 and a Release of 5s.

When I play one note the ADSR works as I’d expect. But if I play a second note while the first note is still decaying the volume of the first note gets massively reduced. I’m also getting a click when the second note is played - presumably due to this fast change in volume of the first note.

Here is what I hope is all the relevant code - for now I’ve commented out the LFO filter-modulation part from the demo in renderNextBlock:

    void Voice::noteStarted()
    {
        auto velocity = getCurrentlyPlayingNote().noteOnVelocity.asUnsignedFloat();
        auto freqHz = (float) getCurrentlyPlayingNote().getFrequencyInHertz();

        DBG ("Start " + String (getCurrentlyPlayingNote().getFrequencyInHertz()));
        processorChain.get<osc1Index>().setFrequency (freqHz, true);
        processorChain.get<osc1Index>().setLevel (velocity);

        processorChain.get<osc2Index>().setFrequency (freqHz * 1.01f, true);
        processorChain.get<osc2Index>().setLevel (velocity);
    }

    void Voice::noteStopped (bool allowTailOff)
    {
        // ...
    }

    void Voice::renderNextBlock (AudioBuffer<float>& outputBuffer, int startSample, int numSamples)
    {
        auto output = tempBlock.getSubBlock (0, (size_t) numSamples);

        if (isActive() && amplitudeAdsr.isActive() == false)
            DBG ("ADSR end");

        for (size_t pos = 0; pos < (size_t) numSamples;)
        {
            auto max = jmin ((size_t) numSamples - pos, lfoUpdateCounter);
            auto block = output.getSubBlock (pos, max);

            juce::dsp::ProcessContextReplacing<float> context (block);
            processorChain.process (context);

            pos += max;
            lfoUpdateCounter -= max;

            if (lfoUpdateCounter == 0)
            {
                lfoUpdateCounter = lfoUpdateRate;
                auto lfoOut = lfo.processSample (0.0f);
                auto cutoffFreqHz = jmap (lfoOut, -1.0f, 1.0f, 100.0f, 2000.0f);
    //            auto& filter = processorChain.get<filterIndex>();
    //            filter.setCutoffFrequencyHz (cutoffFreqHz);
            }
        }

        juce::dsp::AudioBlock<float> (outputBuffer)
            .getSubBlock ((size_t) startSample, (size_t) numSamples)
            .add (tempBlock);

        amplitudeAdsr.applyEnvelopeToBuffer (outputBuffer, startSample, numSamples);
    }

I’m calling applyEnvelopeToBuffer in what I think is the right place - applying it to the whole buffer after the rest of the processing has been done.

But clearly something is not right somewhere - is there something obvious I’ve got wrong?

Edit: read below

You don’t want to apply your ADSR to the outputBuffer; that buffer is shared by all voices. You want to apply your ADSR to your temp buffer, then add that to the output.

Ah - OK that makes sense. I did not realise the outputBuffer was shared like that. Thanks for the super quick response.

Having had a look at the AudioBlock class - which is what the tempBlock is - I can’t see any way to get an AudioBuffer from it. I guess this makes sense given AudioBlock does not own any data and just provides references - but I’m struggling to see how I can use the ADSR applyEnvelopeToBuffer() method on it. Is that just not possible, and I have to apply the ADSR a different way?

I would probably make an AudioBuffer tempBuffer and then get a temp AudioBlock from that.

OK thanks - just wanted to check I’d not missed some obvious way to do this.

Many thanks @RolandMR - swapping the HeapBlock in the demo for a temp AudioBuffer and then pointing the tempBlock at that fixed it.