This is a nitpick, and I’m not even using the routines, but in the process of analyzing some JUCE audio code, it seems that the application of gain ramps (important, to prevent discontinuities in the output waveform caused by time-varying parameters) is sensitive to the buffer size.
Specifically, the duration of the ramp is sensitive to AudioDevice::getCurrentBufferSizeSamples() because the fade is always performed over numSamples in getNextAudioBlock. If the buffer is large, the ramp takes longer, and any user interface control tied to the gain will have some perceptible lag. For example, a 2560-sample DirectSound buffer at 44,100 Hz gives a 58ms fade, but a 64-sample ASIO buffer at 44,100 Hz gives a 1.45ms fade. Huge difference!
Just my opinion, but any object that exposes a gain feature should also have a gainRampMilliseconds parameter or constant, and the number of samples over which the new gain is faded in would be calculated from the sample rate and that value. Depending on the buffer size and the duration of the gain ramp, the code might need to continue the ramp across multiple calls to getNextAudioBlock().
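To make the idea concrete, here's a minimal standalone sketch of a ramp whose length is fixed in milliseconds, so it carries over block boundaries when the buffer is shorter than the remaining ramp. This is plain C++, not the JUCE API; `GainRamp` and all its members are made-up names for illustration:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch: a gain ramp whose duration is set in milliseconds,
// independent of how many samples each process call delivers.
struct GainRamp
{
    double current = 1.0, target = 1.0, step = 0.0;
    int samplesLeft = 0;

    void setTarget (double newGain, double sampleRate, double rampMs)
    {
        target = newGain;
        samplesLeft = std::max (1, (int) (sampleRate * rampMs / 1000.0));
        step = (target - current) / samplesLeft;
    }

    // Applies the ramp to one block; if the block is shorter than the
    // remaining ramp, the ramp simply continues on the next call.
    void process (float* data, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            if (samplesLeft > 0) { current += step; --samplesLeft; }
            else                   current = target;

            data[i] *= (float) current;
        }
    }
};
```

With a 30ms ramp at 44,100 Hz, the ramp spans 1323 samples, so a 64-sample ASIO block only advances it partway, while a 2560-sample block completes it within one call; either way the fade takes the same wall-clock time.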
Like I said, just a nitpick, and certainly not something that I am asking to be changed, but it was worth pointing out.
I totally agree - it’s only done that way as a quick-and-dirty click remover in AudioTransportSource. The “correct” implementation would have been quite a bit more complex, so I opted for a simple fix instead.
Oh, I see. Well, I’d say that in that case, it’s actually the best way to do it. If you’re continuously (and slowly) moving the volume control over a long period, then doing it this way creates a smooth overall slope. But if it used a maximum ramp length, then with a large buffer size you’d hear audible volume steps.
Sorry for bringing this topic back, but I’m a newbie struggling to find a way to apply a gain ramp that is independent of the buffer size, as FL Studio currently has a bug that doesn’t allow me to use fixed-size buffers (when using multiple outputs, the mixer channel assignment gets messed up).
I tried to modify the gain plugin example with a LinearSmoothedValue and a LowpassSmoothedValue, but both attempts failed and I got aliasing when using them in combination with applyGain.
Maybe someone can point me in the right direction to make the gain plugin example aliasing free with a variable buffer size in mind?
Thanks! That was the hint I needed. Sometimes my uptake is close to zero…
I think from around 30ms on, the side effects are negligible… but I wonder if this could be even faster and whether I’m still doing something wrong?
```cpp
void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
{
    // Advance the smoother once per sample, so the ramp length depends only
    // on the smoothing time set on gainSmoother, not on the buffer size.
    for (int i = 0; i < buffer.getNumSamples(); ++i)
    {
        auto newGain = gainSmoother.getNextValue();

        for (int channel = 0; channel < getTotalNumOutputChannels(); ++channel)
            buffer.getWritePointer (channel)[i] *= newGain;
    }
}
```
If I recall right, the LinearSmoothedValue class is not thread safe. So you might need to add mutex locking (or some other thread safety measures) into your parameterValueChanged and processBlock methods.
On the other hand, very fast gain changes simply do cause audio artifacts, there isn’t really any easy way around that. (If you don’t do the gain value smoothing, you get zipper noise, but then with the smoothing very fast modulations of the gain will cause sidebands in the audio.)
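Those sidebands are just ordinary amplitude modulation: wobbling the gain at rate $f_m$ on a tone at $f_c$ splits energy into components at $f_c \pm f_m$, by the standard product-to-sum identity:

$$\cos(2\pi f_c t)\,\bigl[1 + m\cos(2\pi f_m t)\bigr] = \cos(2\pi f_c t) + \frac{m}{2}\cos\bigl(2\pi (f_c - f_m)\,t\bigr) + \frac{m}{2}\cos\bigl(2\pi (f_c + f_m)\,t\bigr)$$

So the faster the gain modulation, the farther the sidebands land from the original tone, which is why very short smoothing times become audible.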
A good test signal for this is a low-pitched sine wave, say 200-400Hz. This will reveal “thumpy” artifacts if your smoothing period is too short. I’m currently using 30ms smoothing which seems a reasonable middle ground.
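That test can also be scripted offline: render a low-pitched sine, drop the gain halfway through either instantly or over a ~30ms ramp, and compare the largest sample-to-sample discontinuity. This is a standalone sketch (plain C++, not JUCE; function names are made up):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Render `total` samples of a unit sine at `freq`, dropping the gain from
// 1.0 to `lowGain` halfway through: instantly if rampSamples == 0, or
// linearly over `rampSamples` samples otherwise.
std::vector<double> renderGainStep (double freq, double sampleRate, int total,
                                    double lowGain, int rampSamples)
{
    const double pi = std::acos (-1.0);
    std::vector<double> out ((std::size_t) total);
    double gain = 1.0;
    const double step = rampSamples > 0 ? (lowGain - 1.0) / rampSamples : 0.0;

    for (int i = 0; i < total; ++i)
    {
        if (i >= total / 2)
            gain = rampSamples > 0 ? std::max (lowGain, gain + step) : lowGain;

        out[(std::size_t) i] = gain * std::sin (2.0 * pi * freq * i / sampleRate);
    }
    return out;
}

// Largest sample-to-sample jump: the hard step produces a big discontinuity
// (the audible "thump"), while the ramped version stays close to the sine's
// own natural slope.
double maxJump (const std::vector<double>& v)
{
    double m = 0.0;
    for (std::size_t i = 1; i < v.size(); ++i)
        m = std::max (m, std::abs (v[i] - v[i - 1]));
    return m;
}
```

With a 310 Hz sine at 44,100 Hz and a gain step from 1.0 down to 0.25, the instant step shows a jump an order of magnitude larger than the 30ms-ramped version, which stays near the sine's own per-sample slope of roughly 0.044.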