Gain ramps dependent on buffer size in prepareToPlay


This is a nitpick, and I’m not even using the routines, but in the process of analyzing some JUCE audio code, it seems that the application of gain ramps (important for preventing discontinuities in the output waveform caused by time-varying parameters) is sensitive to the buffer size.

Specifically, the duration of the ramp is sensitive to AudioIODevice::getCurrentBufferSizeSamples(), because the fade is always performed over numSamples in getNextAudioBlock. If the buffer is large, the ramp will take longer, and any user-interface control tied to the gain will have some perceptible lag. For example, a DirectSound 2560-sample buffer at 44,100 equals a 5.8ms fade, but a 64-sample ASIO buffer at 44,100 is 1.4ms. Huge difference!

Just my opinion, but any object that exposes a gain feature should also have a gainRampMilliseconds parameter or constant, and the number of samples over which the new gain is faded in would be calculated from the sample rate and this value of gainRampMilliseconds. Depending on the buffer size and the duration of the gain ramp, it might be necessary for the code to support ramping the gain across multiple calls to getNextAudioBlock().
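The idea above can be sketched in plain C++ (no JUCE; GainRamp, rampTo, and the constants are invented for illustration): the ramp length is derived from a millisecond value and the sample rate, and the ramp state carries across successive blocks.

```cpp
#include <algorithm>

// Sketch of the suggestion above (plain C++, no JUCE; GainRamp and rampTo
// are invented names). The ramp length is derived from a millisecond value
// and the sample rate, so it is independent of the audio buffer size, and
// the ramp state carries across successive process() calls.
struct GainRamp
{
    float current = 1.0f, target = 1.0f, step = 0.0f;
    int samplesLeft = 0;

    // Start ramping towards newTarget over gainRampMilliseconds.
    void rampTo (double sampleRate, double gainRampMilliseconds, float newTarget)
    {
        target = newTarget;
        samplesLeft = std::max (1, (int) (sampleRate * gainRampMilliseconds / 1000.0));
        step = (target - current) / (float) samplesLeft;
    }

    // Apply the gain to one block; a long ramp simply spans several blocks.
    void process (float* samples, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            if (samplesLeft > 0) { current += step; --samplesLeft; }
            else                 current = target;

            samples[i] *= current;
        }
    }
};
```

With a 5ms ramp at 44.1kHz the fade always takes ~220 samples, whether the host delivers 64-sample or 2560-sample buffers.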

Like I said, just a nitpick, and certainly not something that I am asking to be changed, but it was worth pointing out.


I totally agree - it’s only done that way as a quick-and-dirty click remover in AudioTransportSource. The “correct” implementation would have been quite a bit more complex, so I opted for a simple fix instead.


2560/44100 ~ 58 ms and 64/44100 ~ 1.45 ms. Just nitpickin’ :wink:

Seriously, a 64 sample ramp is quite good enough, so maybe min(64, sampleBufferSize) ?


And actually, where does it use a ramp that’s the same length as the buffer…? The only place I can see is “jmin (256, info.numSamples)” in AudioTransportSource.


AudioSourcePlayer::audioDeviceIOCallback I think


Oh, I see. Well, I’d say that in that case, it’s actually the best way to do it. If you’re continuously (and slowly) moving the volume control over a long period, then doing it like this, it’ll create a smooth overall slope. But if it was using a maximum ramp length, then when the buffer size is large you’d hear audible volume steps.


What I’m doing in my app is ramping parameter changes over a fixed period of time, currently 5ms. So the audible result is consistent regardless of sampling rate or buffer size.
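The post doesn’t say what shape that fixed-time smoothing has; one common alternative to a linear ramp is a one-pole lowpass on the gain value, with the time constant fixed in milliseconds. A plain C++ sketch (OnePoleSmoother is an invented name, not what this poster necessarily uses):

```cpp
#include <cmath>

// One common way to get buffer-size-independent smoothing: a one-pole
// lowpass on the gain value, with its time constant fixed in milliseconds.
// (Invented name, shown as a sketch; not necessarily this poster's code.)
struct OnePoleSmoother
{
    float current = 0.0f, target = 0.0f, coeff = 0.0f;

    void prepare (double sampleRate, double timeConstantMs)
    {
        // After one time constant the value has covered ~63% of the distance.
        coeff = (float) std::exp (-1.0 / (sampleRate * timeConstantMs / 1000.0));
    }

    float getNextValue()
    {
        current = target + coeff * (current - target);
        return current;
    }
};
```

Unlike a linear ramp it never quite lands on the target, but the smoothing duration is the same regardless of how large the host’s buffers are.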


Sorry for bringing this topic back, but I’m a newbie struggling to find a solution for applying a gain ramp that is independent of the buffer size. FL Studio currently has a bug that doesn’t allow me to use fixed-size buffers (when using multiple outputs, the mixer channel assignment gets messed up).

I tried to modify the gain plugin example with a LinearSmoothedValue and a LowpassSmoothedValue, but both attempts failed, and I got aliasing when using them in combination with applyGain.

Maybe someone can point me in the right direction to make the gain plugin example aliasing free with a variable buffer size in mind?


As far as I can see, LinearSmoothedValue should work fine with a variable buffer size in the processBlock calls. How are you using it?


initialize the smoother:

    void prepareToPlay (double sampleRate, int samplesPerBlockExpected) override
    {
        gainSmoother.reset (sampleRate, 0.002);
    }

set the new target value:

    void parameterValueChanged (int parameterIndex, float newValue) override
    {
        gainSmoother.setValue (newValue); // body assumed: sets the smoother's ramp target
    }

apply the gain with the smoothed value:

    void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
    {
        buffer.applyGain (0, buffer.getNumSamples(), gainSmoother.getNextValue());
    }


That just gets one value from the smoother and applies it to the whole buffer. You will need to write your own loop that calls getNextValue() and applies that to the samples in the buffer.

@jules Would it make sense to have a version of AudioBuffer::applyGain that took in a callable that is used to calculate the gain changes per sample? :slight_smile:

template<typename F>
void applyGain (int startSample, int numSamples, F gainfunc) noexcept
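A rough sketch of what such an overload could look like, written here as a free function over a raw channel pointer (applyGainPerSample is an invented name; a real AudioBuffer member would presumably iterate over channels as well). The callable receives the sample index and returns the gain for that sample:

```cpp
// Sketch of the proposed callable-based gain API (invented free function;
// not part of JUCE). The callable maps a sample index to a gain value.
template <typename F>
void applyGainPerSample (float* samples, int startSample, int numSamples, F gainFunc) noexcept
{
    for (int i = startSample; i < startSample + numSamples; ++i)
        samples[i] *= gainFunc (i);
}
```

A smoother could then be plugged in as `applyGainPerSample (ptr, 0, n, [&] (int) { return gainSmoother.getNextValue(); });`.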


Thanks! That was the hint I needed. Sometimes my uptake is close to zero…
I think from around 30ms on the side effects are negligible… but I wonder if this could be even faster, and whether I’m still doing something wrong?

    void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
    {
        for (int i = 0; i < buffer.getNumSamples(); ++i)
        {
            auto newGain = gainSmoother.getNextValue();

            for (int channel = 0; channel < getTotalNumOutputChannels(); ++channel)
                buffer.getWritePointer (channel)[i] *= newGain;
        }
    }


If I recall right, the LinearSmoothedValue class is not thread safe. So you might need to add mutex locking (or some other thread safety measures) into your parameterValueChanged and processBlock methods.

On the other hand, very fast gain changes simply do cause audio artifacts, there isn’t really any easy way around that. (If you don’t do the gain value smoothing, you get zipper noise, but then with the smoothing very fast modulations of the gain will cause sidebands in the audio.)


A good test signal for this is a low-pitched sine wave, say 200-400Hz. This will reveal “thumpy” artifacts if your smoothing period is too short. I’m currently using 30ms smoothing which seems a reasonable middle ground.
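That test can be sketched in plain C++ (no JUCE; makeTestSignal and the 300 Hz / 30 ms constants are illustrative): generate a low sine, apply a gain step smoothed over a fixed ramp, and inspect the result for discontinuities.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch of the suggested test (invented function; 300 Hz and the step
// size are arbitrary choices): a low sine with a gain step at the halfway
// point, linearly ramped over rampMs. Listen to or plot the result to
// check for "thumpy" artifacts.
std::vector<float> makeTestSignal (double sampleRate, double freqHz,
                                   double rampMs, int numSamples)
{
    const double twoPi = 6.283185307179586;
    std::vector<float> out ((size_t) numSamples);
    const int rampSamples = std::max (1, (int) (sampleRate * rampMs / 1000.0));
    const int stepAt = numSamples / 2;

    for (int i = 0; i < numSamples; ++i)
    {
        float sine = (float) std::sin (twoPi * freqHz * i / sampleRate);

        // Gain steps from 1.0 down to 0.25, ramped linearly.
        float gain = 1.0f;
        if (i >= stepAt)
            gain = 0.25f + 0.75f * std::max (0.0f, 1.0f - (float) (i - stepAt) / (float) rampSamples);

        out[(size_t) i] = sine * gain;
    }
    return out;
}
```

With a 30ms ramp the sample-to-sample change stays small throughout the step, which is exactly what a thump-free fade should look like.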