Voice steal pops

I’ve looked at the notes on noteOff for voices here:

And the logic behind voice stealing:

And then how the voice is stolen:

The issue I’m having is that with a long release, if I play more notes than there are voices, one of the voices gets stolen without even a few frames to fade out.

What I’m thinking this code could do is optionally allow users to return a small buffer of samples of the note fading out (just enough to stop a pop).

What do you guys think about that? And how would/should it be implemented? Or, perhaps, do you have a better method?

Perhaps one could just handle this by creating a dumpBuffer in the Synth class and letting voices dump to it in times of need… I’ll do that for now, but I really want to know how you guys have handled it.


You need to write a “killQuick()” function in your envelope class that you can call in the stopNote() function when allowTailOff is false. About 3–5 ms of fade-out is enough to sound smooth and instant. Anything quicker will pop, in my experience.
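A minimal, framework-free sketch of that idea (the class and method names here are made up for illustration, not JUCE APIs): when the voice is killed, the envelope abandons its normal release curve and ramps linearly to zero over a few milliseconds.

```cpp
#include <algorithm>
#include <cassert>

// Illustrative envelope with a "killQuick()" fast fade-out.
class FadeEnvelope
{
public:
    explicit FadeEnvelope (double rate) : sampleRate (rate) {}

    void noteOn (float level) { gain = level; killing = false; }

    // Begin a fast linear fade; ~5 ms is long enough to avoid a pop.
    void killQuick (double fadeSeconds = 0.005)
    {
        killing = true;
        step = gain / static_cast<float> (fadeSeconds * sampleRate);
    }

    float getNextSample()
    {
        if (killing)
            gain = std::max (0.0f, gain - step);
        return gain;
    }

    bool isActive() const { return gain > 0.0f; }

private:
    double sampleRate;
    float gain = 0.0f, step = 0.0f;
    bool killing = false;
};
```

At 44.1 kHz a 5 ms killQuick() reaches silence in roughly 220 samples, regardless of how long the musical release was set to.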


Is there a way to figure out how many samples I have left before it’ll get cut off?

Also, from the code, it looks like it should happen immediately.

Write your own synthesiser class that inherits from Synthesiser and override the default behaviour for when notes are killed or stolen.

FWIW, a buffer size of 256 at 44100 Hz is 5.8 ms: plenty of time for your voice to write its samples to the output buffer, mark itself as cleared, and be free to be triggered for the next note.
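For reference, the arithmetic behind that figure, as a pair of tiny helpers (the names are just for this example):

```cpp
#include <cassert>
#include <cmath>

// Block duration in milliseconds for a given buffer size and sample rate.
double blockMs (int numSamples, double sampleRate)
{
    return 1000.0 * numSamples / sampleRate;
}

// Inverse mapping: how many samples a fade of the given length needs.
int fadeLengthInSamples (double milliseconds, double sampleRate)
{
    return static_cast<int> (std::lround (milliseconds * sampleRate / 1000.0));
}
```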


Relying on the buffer size is not a clean solution. Chances are high that the message triggering the kill arrives in the middle of the buffer (or even worse), so you might end up with only a few samples left, and then there’s your click again.


I haven’t looked at the code yet, but technically all noteOn and noteOff MIDI messages are available at the beginning of the full block, so it wouldn’t be a problem to handle the killing of notes first, let them write a fade-out buffer, and have those voices free again within the very same buffer. There is no parallel handling of events in the middle of a buffer.
Very short buffers would indeed be problematic, though: on extremely fast systems you might have only 16 samples of time (I don’t know if such systems exist).


The juce::Synthesiser class chops up the buffer at the event timestamps and then renders the voices for those sub-chunks, so this would require a rewrite of the entire class (and then voice-steal pops would be the smallest problem you’ll encounter).

I keep a few more voices around than the actual limit and start killing voices with a 10 ms fade once the “soft limit” is reached.


One way to solve the problem is to allocate extra voices (e.g. twice the amount) when the synth is created. Then there is an internal voice limit, after which new note-ons start killing the oldest voices. Only when the voices really run out is voice stealing used. If the kill time of a voice is fast (a couple of milliseconds), the CPU load stays light and notes fade out gracefully without clicks.
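The soft-limit idea from the last two posts can be sketched without any framework code (all names here are invented for the example): allocate more voices than the musical limit, and once a new note pushes the count of fully playing voices past the soft limit, put the oldest one into its short kill fade instead of cutting it dead.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy voice: "fading" means it is in its few-ms kill fade, still audible.
struct Voice
{
    bool playing = false;
    bool fading  = false;
    long startTime = 0;   // when the note began, for "oldest" ordering
};

class SoftLimitPool
{
public:
    SoftLimitPool (std::size_t soft, std::size_t hard)
        : softLimit (soft), voices (hard) {}

    // Returns the voice chosen for the new note, fading the oldest if needed.
    Voice* noteOn (long now)
    {
        std::size_t active = 0;
        for (auto& v : voices)
            if (v.playing && ! v.fading) ++active;

        // Past the soft limit: start a quick fade on the oldest voice.
        if (active >= softLimit)
            if (Voice* oldest = findOldest())
                oldest->fading = true;

        // Pick any completely free voice.
        for (auto& v : voices)
            if (! v.playing)
            {
                v.playing = true; v.fading = false; v.startTime = now;
                return &v;
            }
        return nullptr; // hard limit reached: real stealing would happen here
    }

private:
    Voice* findOldest()
    {
        Voice* oldest = nullptr;
        for (auto& v : voices)
            if (v.playing && ! v.fading
                && (oldest == nullptr || v.startTime < oldest->startTime))
                oldest = &v;
        return oldest;
    }

    std::size_t softLimit;
    std::vector<Voice> voices;
};
```

A real implementation would also reclaim voices whose fade has finished; this sketch only shows the soft-limit decision itself.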


I came up with a pretty solid solution. All of my voices already have access to the synth that owns them, so I created an audio buffer in my synth for voices to emergency write to.

void SmoothWaveVoice::stopNote (float /*velocity*/, bool allowTailOff) {
  if (allowTailOff == false) {
    // render the last few frames into the synth's shared fade buffer
    // before this voice is freed for reuse
    int numSamples = SmoothWaveSynth::FRAMES_TO_FADE;
    synth->voiceStolenFadeNumSamples = numSamples;
    renderNextBlock (synth->voiceStolenFade, synth->voiceStolenFadeCurrentSample, numSamples, true);
    clearCurrentNote();
  } else {
    gainAdsr.noteOff();
  }
}
Then in my renderNextBlock function, I was already rendering the entire envelope at once, so I just grab the last value and then do a linear fade which will make up the difference. For example, if at the end of these samples the env gain would be 0.25, a linear fade to drop by that much is mixed in. But if the env gain would be 0 already, nothing is done. In this way, it doesn’t mess with my curved releases any more than is absolutely necessary.

void SmoothWaveVoice::renderNextBlock (AudioSampleBuffer& outputBuffer, int startSample, int numSamples, bool stolen) {
  std::vector<float> gainEnv ((size_t) numSamples); // a VLA (float gainEnv[numSamples]) is non-standard C++
  int s = 0;
  for (; s < numSamples; s++) {
    if (gainAdsr.isActive() == false) {
      if (s == 0) { return; }  // envelope already finished, nothing to render
      numSamples = s;          // only render the samples the envelope produced
      break;
    }
    gainEnv[s] = gainAdsr.getNextSample();
  }

  float gainLossRate = 0.0f;
  if (stolen) {
    // linear fade that makes up whatever gain the envelope would leave over
    gainLossRate = gainEnv[numSamples - 1] / (float) numSamples;
  }

  // .... actual rendering code ....
}

The hardest part of this was that my voiceStolenFade AudioBuffer could not consistently be the size of any render phase’s outputBuffer, so I made a rotating buffer and manually copy the frames over… I don’t really like the following code, it feels too verbose. But it works for now.

void SmoothWaveSynth::renderNextBlock (AudioBuffer<float>& outputAudio,
                                       const MidiBuffer& inputMidi,
                                       int startSample,
                                       int numSamples) {
  Synthesiser::renderNextBlock (outputAudio, inputMidi, startSample, numSamples);

  if (voiceStolenFadeNumSamples > 0) {
    // mix the pre-rendered fade into the output, reading from the ring buffer
    for (auto i = outputAudio.getNumChannels(); --i >= 0;) {
      for (int s = 0; s < numSamples; s++) {
        int destSampleNum = startSample + s;
        int sourceSampleNum = (voiceStolenFadeCurrentSample + s) % FRAMES_TO_FADE;
        outputAudio.addSample (i, destSampleNum,
                               voiceStolenFade.getSample (i, sourceSampleNum));
      }
    }
    voiceStolenFadeNumSamples -= numSamples;

    // clear the region we just consumed, handling wrap-around
    if (voiceStolenFadeCurrentSample + numSamples > FRAMES_TO_FADE) {
      int tailSamples = FRAMES_TO_FADE - voiceStolenFadeCurrentSample;
      voiceStolenFade.clear (voiceStolenFadeCurrentSample, tailSamples);
      voiceStolenFade.clear (0, numSamples - tailSamples);
    } else {
      voiceStolenFade.clear (voiceStolenFadeCurrentSample, numSamples);
    }
    voiceStolenFadeCurrentSample = (voiceStolenFadeCurrentSample + numSamples) % FRAMES_TO_FADE;
  }
}

Any pointers on any of this code would be welcome.

Also, I’m not sure where the problem lies in this, but this code still pops if I really hammer on some long-release keys in AudioPluginHost, yet when I use the VST in Bitwig, there are no pops at all.

Do you guys know if AudioPluginHost is at fault, or is Bitwig just idiot proofing my audio?

I recently stumbled into the same problem while working on a little drum synth and I found this thread quite helpful.

I ended up with a solution that incorporates some of the ideas discussed here. It roughly consists of the following steps:

  • equip the envelope class with a special fastRelease() method - this basically releases the note very fast (a few ms) without a click
  • add more voices than necessary (say 2 rather than 1)
  • override Synthesiser::noteOn so that every time a new note is played, it:
    • first “kills” every playing voice (by “kill” I mean calling a method in the voice that in turn ends up calling fastRelease())
    • then proceeds “as normal” (by calling the parent class implementation)

Something like this:

void noteOn (const int midiChannel,
             const int midiNoteNumber,
             const float velocity) override
{
    for (auto* voice : voices)
        if (midiChannel <= 0 || voice->isPlayingChannel (midiChannel))
            dynamic_cast<SynthVoice*> (voice)->callFastRelease();

    Synthesiser::noteOn (midiChannel, midiNoteNumber, velocity);
}

Seems to work quite well at the moment, but I still wonder what’s the “canonical” or best-practice solution?

Wouldn’t the easiest way be the following:

  • add an AudioBuffer<float> as a member of your voice
  • call it overlap* and resize it to hold maybe 10-50ms of audio
  • add an int called overlapIndex and set it to -1
  • when stopNote(...) is called with allowTailOff == false:
    • render** the next 10-50ms of your voice to overlap and apply a fade-out
    • set overlapIndex to 0
  • in every call of renderNextBlock(...):
    • check if (overlapIndex > -1)
    • if true, add the buffer contents to your outputBuffer and increase the overlapIndex
    • if overlapIndex == bufferSize -> overlapIndex = -1

If you additionally want to take care of the case where a freshly stolen voice gets stolen again, you should implement a ring buffer, but I guess that’s not really necessary unless Rachmaninov is using your synthesizer.

Note *: you may name your buffer differently
Note **: you even can call your renderNextBlock method! Just make sure to set overlapIndex to -1 before doing that :slight_smile:
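The recipe above can be sketched self-contained with plain std::vector instead of juce::AudioBuffer (every name here is invented for the example): hardStop() pre-renders the fade into overlap, and each later render call mixes it into the output until overlapIndex runs off the end.

```cpp
#include <cstddef>
#include <vector>

class OverlapVoice
{
public:
    explicit OverlapVoice (std::size_t overlapSize)
        : overlap (overlapSize, 0.0f) {}

    // stopNote with allowTailOff == false: pre-render the fade-out.
    void hardStop()
    {
        const std::size_t n = overlap.size();
        for (std::size_t i = 0; i < n; ++i)
        {
            const float fade = 1.0f - (float) i / (float) n; // linear fade-out
            overlap[i] = renderOneSample() * fade;
        }
        overlapIndex = 0; // arm the overlap for the next render calls
    }

    // Called from the audio callback; mixes any pending overlap into `out`.
    void renderNextBlock (std::vector<float>& out)
    {
        if (overlapIndex < 0)
            return;
        for (float& sample : out)
        {
            if (overlapIndex == (int) overlap.size()) { overlapIndex = -1; break; }
            sample += overlap[(std::size_t) overlapIndex++];
        }
    }

private:
    // Stand-in for the real oscillator/envelope; a constant signal here.
    float renderOneSample() { return 0.5f; }

    std::vector<float> overlap;
    int overlapIndex = -1; // -1 means no fade pending
};
```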


@danielrudrich, I haven’t tried this yet, but I like the approach.

Maybe it’s a bit more work to do with the buffers compared to what I’ve done, but then there’s no redundant voices around, no extra methods on the envelope etc. - more “self-contained”.

I think I’ll give it a go, thanks for the suggestion!

If the voice that is stealing is of the same sound type, and it’s a subtractive synth, then simply do not reset any of the phases and don’t reset the ‘volume’ of the ADSRs. Think of it all as a continuous flow of voltages through a circuit, not a strict piece of robotic software. :slightly_smiling_face:

@DaveH this is pretty much equivalent to some kind of “legato” effect, right?

Yeah, I suppose it is. The buffer fade is probably going to give better results if you only have two voices, though, or if you’re doing a sample player, where there is no coherence between the two sounds.

It’s a drum synth in this case. It doesn’t have different “pads” though - one instance of the plugin is one drum sound (monophonic).
Envelope is “one shot” - there is no sustain phase, once you press a key it will play the full drum sound.

So I guess when a note is stolen I want it to actually stop and trigger the new one (with its own envelope).

But yeah I think the mentioned approach makes sense for “traditional” synth sound, and indeed when there are a few voices involved.

Interesting to see how many different solutions there are for such a common issue :slight_smile:

If it’s the same sound, then your ADSR may be better suited to retriggering at the current level. After all, a tom is still boinging when you hit it again quickly (excuse all the technical jargon :slightly_smiling_face:); it doesn’t start from zero again?
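The difference between hard and soft retriggering can be shown with a toy linear envelope (names are illustrative, not a real API): a soft retrigger keeps the current level and ramps up from there, so the output never jumps discontinuously.

```cpp
#include <cassert>

class RetriggerEnvelope
{
public:
    RetriggerEnvelope (float attack, float release)
        : attackStep (attack), releaseStep (release) {}

    // Hard retrigger: jump back to silence (can click mid-note).
    void noteOnHard() { level = 0.0f; rising = true; }

    // Soft retrigger: keep the current level and ramp up from there.
    void noteOnSoft() { rising = true; }

    void noteOff() { rising = false; }

    float getNextSample()
    {
        if (rising)
            level = (level + attackStep > 1.0f) ? 1.0f : level + attackStep;
        else
            level = (level - releaseStep < 0.0f) ? 0.0f : level - releaseStep;
        return level;
    }

private:
    float attackStep, releaseStep;
    float level = 0.0f;
    bool rising = false;
};
```

A per-sound switch between noteOnHard() and noteOnSoft() would give users the “hard”/“soft” retrigger choice discussed below.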


That’s an interesting idea, too.
It would be nice to have it as an additional feature that can be switched on/off for each sound, I guess (so a user can choose between “hard” or “soft” retrigger depending on the type of drum sound).
Thanks for your input, I’ll keep this in mind!

This is pretty much the solution I used, but with the “overlap” buffer being held in the synth class instead. I can confirm it works quite well.

Also, having MyVoice::renderNextBlock know whether or not it’s got a deadline for fading out was important to my implementation, because I still used my envelope release to see how close to faded out it would be by the end of the “overlap”. Then I just faded it the tiny amount it needed to hit zero by the end, or not at all if the envelope would make it end in time.

I was looking into this and wanted to note for any future searchers that JUCE’s own synth tutorial calls clearCurrentNote() in renderNextBlock: https://docs.juce.com/master/tutorial_synth_using_midi_input.html

I’m following that strategy, tracking whether or not the note is “ending” but only clearing the note when my ADSR .isActive() state has changed to false:

        // release the voice after the adsr is finished
        if (noteEnding && ! processorChain.get<adsrIndex>().isActive()) {
            noteEnding = false;
            clearCurrentNote();
        }

In the case of voice stealing, I’m also setting the ADSR’s release to single-digit milliseconds in noteOff():

        // if this voice is being stolen, we still want a tiny bit of release
        if (! allowTailOff) {
            processorChain.get<adsrIndex>().setRelease (0.006f); // 6 ms is enough for a fade while the new note feels "real time"
        }
        noteEnding = true;