Clicks during attack stage of amp envelope (using JUCE's builtin ADSR class)

Hi all,

I’ve been looking at the ADSR class and I’m hoping somebody could help me out a bit here. I managed to reproduce my issue with the JUCE tutorials, so I’ll use those as a reference.

As a first experiment, I tried making use of ADSR in the Build a MIDI synthesiser tutorial and things work as expected there.
What I’ve done is basically call noteOn() and noteOff() from noteStarted() and noteStopped() respectively, and then add the following inside my renderNextBlock():

// ...
auto currentAmpEnv = m_ampEnv.getNextSample();
auto currentSample = (float) (std::sin (currentAngle) * (level * currentAmpEnv));
// ...
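
For reference, this is roughly how I hooked the envelope into the voice class (simplified from memory, so treat it as a sketch rather than my exact code):

void noteStarted() override
{
    // ... tutorial code that sets up currentAngle, level, etc. ...
    m_ampEnv.noteOn();    // start the attack stage
}

void noteStopped (bool allowTailOff) override
{
    // ... tutorial code ...
    m_ampEnv.noteOff();   // move to the release stage
}

// m_ampEnv is a juce::ADSR member of the voice; its sample rate and
// ADSR parameters are set elsewhere (e.g. when the voice is initialised).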

Now, as a next example, I tried using ADSR in the Introduction to DSP tutorial, which is also a simple synth but uses the juce::dsp classes and encourages computing modulation not at every sample, but rather once per group/block of samples (whose size can be fine-tuned). Here I experience some audio glitches.

Following the style of the example, my code looks like this:

void renderNextBlock (AudioBuffer<float>& outputBuffer, int startSample, int numSamples) override
{
	// ...
	for (size_t pos = 0; pos < numSamples;)
	{
		// ...
		processorChain.process(context);
			
		pos += max;
		lfoUpdateCounter -= max;
			
		if (lfoUpdateCounter == 0)
		{
			lfoUpdateCounter = lfoUpdateRate;
			float ampEnvOut = m_ampEnv.getNextSample();
			processorChain.get<osc1Index>().setLevel(ampEnvOut);
		}
	}
	// ...
}

This gives me audible glitches in the very first milliseconds of the attack stage.
After several experiments I found that the problem becomes much worse as I increase the value of lfoUpdateRate: if I keep it at 10 the problem is barely noticeable; with 100 it gets worse, etc. If I set it to 1 it almost goes away, i.e. there’s no more “distortion”, but there’s still a very small click at the very beginning of the attack (which is not there in the first “naive” example).

Now, my understanding is that in the first tutorial we go through each sample in the buffer and apply the envelope directly, while in the second tutorial we do not apply modulation at each sample but rather once per block of samples, as defined by lfoUpdateRate. Setting lfoUpdateRate to 1 should therefore make it equivalent to the first example.

There are some things I do not understand here:

  • why do I get those glitches at all in the second example? I am aware of the concepts of “audio rate” vs “control rate”, and I think that I indeed do not want to compute the envelope for each sample, so setting lfoUpdateRate to 1 is not the solution. But what is happening here? Do I need perhaps some kind of “smoothing”, along the lines of the rough sketch after this list?

  • when I set lfoUpdateRate to 1, my understanding is that the two examples should be equivalent (i.e. modulation is computed and applied for each sample). But why is there still a little click in the second example while there’s none in the first?
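
(By “smoothing” I mean something vaguely like the sketch below, i.e. interpolating between successive control-rate envelope values instead of jumping to each new level. I haven’t tried this yet, and the names are just placeholders:)

juce::SmoothedValue<float> smoothedLevel;   // hypothetical member of the voice

// when the voice is prepared (the ramp length here is just a guess):
smoothedLevel.reset (sampleRate, 0.001);

// at control rate, i.e. once every lfoUpdateRate samples:
smoothedLevel.setTargetValue (m_ampEnv.getNextSample());

// at audio rate, i.e. for each sample in the sub-block:
auto gain = smoothedLevel.getNextValue();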

Thanks in advance and please do let me know if I need to post more info!

When you set up your processorChain and the context, do you apply that to the outputBuffer?
The problem with that would be that the outputBuffer already contains samples. You can tell from the tutorial, which uses addSample rather than setSample.

You will have to create a local buffer (allocated beforehand!) in each voice, where you create the sample data starting from a clean buffer, call the replacing process, and finally add that buffer to the outputBuffer using AudioBuffer::addFrom().
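
Something along these lines (just a rough sketch, member names are made up, and tempBuffer would have to be allocated with a matching size and channel count beforehand):

void renderNextBlock (AudioBuffer<float>& outputBuffer, int startSample, int numSamples) override
{
    tempBuffer.clear();   // start from a clean local buffer

    juce::dsp::AudioBlock<float> block (tempBuffer);
    auto subBlock = block.getSubBlock (0, (size_t) numSamples);
    juce::dsp::ProcessContextReplacing<float> context (subBlock);
    processorChain.process (context);   // replacing: writes into the clean buffer

    // mix the voice's output on top of what is already in outputBuffer
    for (int ch = 0; ch < outputBuffer.getNumChannels(); ++ch)
        outputBuffer.addFrom (ch, startSample, tempBuffer, ch, 0, numSamples);
}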


Hey @daniel thanks for your reply!
If my understanding is right, my code (which is a simple modification of the Introduction to DSP tutorial’s code) is already doing what you suggest.

Here is the tutorial’s code for renderNextBlock:

void renderNextBlock (AudioBuffer<float>& outputBuffer, int startSample, int numSamples) override
{
    auto output = tempBlock.getSubBlock (0, (size_t) numSamples);
    output.clear();
    for (size_t pos = 0; pos < numSamples;)
    {
        auto max = jmin (static_cast<size_t> (numSamples - pos), lfoUpdateCounter);
        auto block = output.getSubBlock (pos, max);
        juce::dsp::ProcessContextReplacing<float> context (block);
        processorChain.process (context);
        pos += max;
        lfoUpdateCounter -= max;
        if (lfoUpdateCounter == 0)
        {
            lfoUpdateCounter = lfoUpdateRate;
            auto lfoOut = lfo.processSample (0.0f);                                // [5]
        auto cutoffFreqHz = jmap (lfoOut, -1.0f, 1.0f, 100.0f, 2000.0f);       // [6]
        processorChain.get<filterIndex>().setCutoffFrequencyHz (cutoffFreqHz); // [7]
        }
    }
    juce::dsp::AudioBlock<float> (outputBuffer)
        .getSubBlock ((size_t) startSample, (size_t) numSamples)
        .add (tempBlock);
}

and here is my version (replacing the LFO with an ADSR and keeping the rest unchanged):

    void renderNextBlock (AudioBuffer<float>& outputBuffer, int startSample, int numSamples) override
    {
        auto output = tempBlock.getSubBlock (0, (size_t) numSamples);
        output.clear();

        for (size_t pos = 0; pos < numSamples;)
        {
            auto max = jmin (static_cast<size_t> (numSamples - pos), lfoUpdateCounter);
            auto block = output.getSubBlock (pos, max);

            juce::dsp::ProcessContextReplacing<float> context (block);
            processorChain.process (context);

            pos += max;
            lfoUpdateCounter -= max;

            if (lfoUpdateCounter == 0)
            {
                lfoUpdateCounter = lfoUpdateRate;
                m_ampEnvOut = m_ampEnv.getNextSample();
                processorChain.get<osc1Index>().setLevel (m_ampEnvOut);
            }
        }

        juce::dsp::AudioBlock<float> (outputBuffer)
            .getSubBlock ((size_t) startSample, (size_t) numSamples)
            .add (tempBlock);
    }

So if I understand correctly, the sample data I create ends up in tempBlock and at the end is added into outputBuffer with

		juce::dsp::AudioBlock<float> (outputBuffer)
			.getSubBlock((size_t)startSample, (size_t)numSamples)
			.add (tempBlock);

Am I missing something? Thanks!

Yes, that seems right to me. However, the processorChain looks a bit strange to me.

a) is the processorChain.process() call intentionally inside the loop? it seems you process samples more than once…
b) you probably want to call setLevel before the process call…
c) what would be cool would be an ADSR gain processor (roughly what I sketch below), so you don’t have to call process sample by sample; but at the moment, that seems to be the way to go…
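
To illustrate what I mean with c), something roughly along these lines (an untested sketch, names made up) could apply the envelope per sample inside the chain, so you would not need to drive setLevel from the render loop:

struct ADSRGainProcessor
{
    void prepare (const juce::dsp::ProcessSpec& spec)   { adsr.setSampleRate (spec.sampleRate); }
    void reset()                                        { adsr.reset(); }

    template <typename ProcessContext>
    void process (const ProcessContext& context)
    {
        auto&& inBlock  = context.getInputBlock();
        auto&& outBlock = context.getOutputBlock();

        // one envelope value per sample, applied to every channel
        for (size_t i = 0; i < outBlock.getNumSamples(); ++i)
        {
            auto gain = adsr.getNextSample();

            for (size_t ch = 0; ch < outBlock.getNumChannels(); ++ch)
                outBlock.setSample ((int) ch, (int) i, inBlock.getSample ((int) ch, (int) i) * gain);
        }
    }

    juce::ADSR adsr;   // call noteOn()/noteOff()/setParameters() on this from the voice
};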

Good luck

a) is the processorChain.process() call intentionally inside the loop? it seems you process samples more than once…

If my understanding is correct, then yes, it should be inside the loop:
the idea is to fill in the buffer “piece by piece”, where each piece is made of lfoUpdateCounter samples.

So, if for example we have lfoUpdateCounter = 100, then the actual buffer is split into “blocks” of 100 samples each; for each of those blocks we compute the envelope, apply it to the gain processor and then call process (so all 100 samples in a block share the same amplitude).

b) you probably want to call the setLevel before the process call…

Yes, that’s a good point! I rearranged the code to do that. It doesn’t solve the initial issue, but it’s still an improvement.

I’ve been investigating a bit more and got some new info:

  • In the initial description I said the problem only happens in the attack stage, but on closer inspection that is not true. It also seems to happen in the decay phase. It becomes more and more noticeable as I increase lfoUpdateRate (e.g. with 1000 it is very noticeable)

  • It is not only with ADSR! I tried using an LFO to modulate the amplitude and got the same problem (clicks/pops, crackling)

I found that setting something like gain.setRampDurationSeconds(0.001) helps quite a lot. However, I am a bit confused about what a good value to use is: if I use something “too big” (e.g. 0.1) my sound loses its transient (I’m doing some drum synthesis), but if I use something too small (e.g. 0.0001) then I pretty much get the original problem back. There seems to be a sweet spot somewhere, but I think I’m missing some details. Is there any recommendation/rule of thumb?

Thank you!

OK, looks like I kinda figured out what’s going on and how to fix it:

  • the glitches may be due to the “low resolution” of the amplitude envelope, especially with large lfoUpdateRate values. For example, if the sample rate is 44100 and lfoUpdateRate = 100, then the amp envelope will have a “sample rate” of 44100 / 100 = 441, i.e. 441 points per second. Add to this the fact that I am using very short/fast envelopes and you get the audible glitches.

  • as mentioned before, it can be solved by smoothing the amplitude values, in my case using gain.setRampDurationSeconds(...), but I wasn’t sure about the “best” value to use there. After some experiments I came up with the following, which seems to work just fine:

		// ramp over one control-rate period, i.e. m_modulationUpdateRate / sampleRate seconds
		auto modSamplePeriod   = 1 / (sampleRate / m_modulationUpdateRate);
		oscSectionProcessorChain.get<oscSectionOscIndex>().setRampDuration (modSamplePeriod);
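
(Sanity-checking the numbers: with a sample rate of 44100 and m_modulationUpdateRate = 100, that gives 100 / 44100 ≈ 0.0023 s, i.e. the gain ramps over roughly one control-rate period of about 2.3 ms, so each step is smoothed away just in time for the next update.)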

Now there’s a little dilemma: basically I am trying to avoid computing an envelope point at every sample, so I compute one per group of samples instead; but then, because the resulting envelope is not smooth enough, I apply smoothing to the gain (which has a cost of its own). Doesn’t that defeat the purpose a little bit?