class MyAudioProcessor : public juce::AudioProcessor
{
public:
    MyAudioProcessor() {}

    void prepareToPlay(double sampleRate, int samplesPerBlock) override
    {
        gainCompensator.prepare(sampleRate);
    }

    void processBlock(juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
    {
        juce::AudioBuffer<float> inputBuffer(buffer); // keep a dry copy of the input

        // FX processing applied to buffer here

        gainCompensator.processBlock(inputBuffer, buffer);
    }

private:
    GainCompensator gainCompensator;
};
- Smoothing over 300–500 ms → no sharp jumps.
- Logarithmic (dB-domain) gain → perceived naturally.
- RMS analysis instead of peak level.
- No latency (works within a single audio block).
Spectrum analysis would still need to be added if there is compression or equalization in the processing chain, but I would like to hear a range of opinions on this algorithm. Thank you.
I think this is a pretty tricky topic and there’s no easy one-size-fits-all solution. In your code there are already a few cases where I could imagine this sounding weird: putting something with a broad and changing spectrum (like drums) through a high-pass filter would cause large level differences for bass drums but not for hi-hats or cymbals, so the gain compensator would fluctuate continuously in a way that might sound like a strange compressor. Similarly, feeding a dynamic signal into a distortion would cause frequent fluctuations. So making this work transparently (if that is your goal) will be tricky. Smoothing helps, but it also makes the compensation sluggish in response to abrupt changes. In some contexts that would be acceptable.
I think this is why more complex plugins have a gain-match button: they average over longer periods, and the user then decides when to apply the compensation as a single change to the output gain parameter. Softube uses this in some of their plugins, as does iZotope’s Ozone, and Bitwig has it in at least one of their modules. That’s just off the top of my head.
I’d also look into perceptual loudness and into using EQs and filters to measure more accurately how we hear sound (things like K-weighting). There are a few standards and opinions on this; the EBU R 128 LUFS standard is pretty common, though it has its critics too. You already mentioned spectrum analysis, so you may already be aware of this, or at least on track to getting into it.
I think you’re off to a good start, but you’ve entered a rabbit hole, and where you go from here probably depends heavily on the context in which you want to use this.
This code is incorrect: the output buffer hasn’t been written yet, so you don’t have the output RMS. Since the caller always sets the input buffer equal to the output buffer, they’ll always be the same.
This assumes the buffer size is constant (it isn’t guaranteed to be), and you want to smooth the gain envelope at or near audio rate, not at the buffer size (which is going to be quite large).
You’re also assuming the same gain envelope can be applied to all channels. That’s not unreasonable but it’s worth acknowledging if that’s your intent.
The FFT looks interesting and could enable some special spectral processing if you want it. However, it might not be suitable for gain compensation, because:
- it is relatively heavy computationally;
- it delays the analysis result, which means the gain compensation will also be delayed.

Perhaps you could replace the FFT analysis with a few low/band/high-pass filters. However, if you do too much spectrum-dependent gain compensation, you are going to:
- introduce unexpected harmonics into the input signal;
- cancel out the very effect you are trying to apply.
By the way, I would suggest a clipper at the end of the compensation chain to prevent it from blowing up the signal.
Here is an auto gain compensation that I use (I have ensured the size of the incoming audio block is ~1 ms):
Hey, thank you so much. I have added the following improvements:
- Calculating RMS of the input and output after processing.
- Implementing gating to avoid reacting to very sharp signal changes.
- Smoothing the gain at a higher rate.
- Channel-specific gain adjustments (if necessary).
What do you think?
#include <cmath>
#include <juce_audio_basics/juce_audio_basics.h>

// Compute RMS for a given buffer
float computeRMS(const juce::AudioBuffer<float>& buffer)
{
    float rms = 0.0f;
    const int numChannels = buffer.getNumChannels();
    const int numSamples = buffer.getNumSamples();

    for (int ch = 0; ch < numChannels; ++ch)
    {
        const float* channelData = buffer.getReadPointer(ch);
        for (int i = 0; i < numSamples; ++i)
            rms += channelData[i] * channelData[i];
    }

    rms = std::sqrt(rms / (numChannels * numSamples));
    return rms;
}

// Apply gain reduction with smoothing and gating
void applyGainWithCompensation(juce::AudioBuffer<float>& inputBuffer,
                               juce::AudioBuffer<float>& outputBuffer,
                               float sampleRate)
{
    const int numChannels = inputBuffer.getNumChannels();
    const int numSamples = inputBuffer.getNumSamples();

    // Process the input and copy it to the output buffer for gain adjustment
    for (int ch = 0; ch < numChannels; ++ch)
    {
        const float* inputData = inputBuffer.getReadPointer(ch);
        float* outputData = outputBuffer.getWritePointer(ch);

        for (int i = 0; i < numSamples; ++i)
            outputData[i] = inputData[i]; // Apply your gain or processing here
    }

    // Calculate RMS for the input signal
    const float rmsIn = computeRMS(inputBuffer);
    // Compute RMS for the output signal (after processing)
    const float rmsOut = computeRMS(outputBuffer);

    // Gain reduction: avoid divide by zero, use a small value if RMS is too low
    const float gainReduction = 20.0f * std::log10((rmsOut + 1e-8f) / (rmsIn + 1e-8f));

    // Apply gain reduction
    float gain = std::pow(10.0f, -gainReduction / 20.0f);

    // Gating: if the output RMS is very low (noise floor), apply no gain change
    const float gateThreshold = 0.001f; // Adjustable threshold for gating
    if (rmsOut < gateThreshold)
        gain = 1.0f;

    // Smooth the gain envelope (once per block)
    static float prevGain = 1.0f; // Store previous gain value for smoothing
    const float alpha = 0.05f;    // Smoothing factor (controls how quickly gain responds)
    const float smoothedGain = alpha * prevGain + (1.0f - alpha) * gain;
    prevGain = smoothedGain;

    // Apply the smoothed gain to each channel
    for (int ch = 0; ch < numChannels; ++ch)
    {
        float* outputData = outputBuffer.getWritePointer(ch);
        for (int i = 0; i < numSamples; ++i)
            outputData[i] *= smoothedGain;
    }
}
I would never update the auto gain compensation value continuously. The equalizer plugin has one additional static output gain and one additional static gain compensation.
Avoiding Full RMS Calculation on Each Call (Cumulative RMS calculation could be done here for efficiency):
Because I calculate the RMS value per block and let juce::dsp::Gain smooth the result for me. If I chose cumulative RMS, the auto gain compensation would react extremely slowly to changes in plugin parameters, which is not desired in my case.
You’re not persisting your processor state or calculating RMS consistently (using static local variables, for example!). This looks like code incorrectly generated by an LLM.
I am currently trying to get as many options as possible. As I mentioned earlier, the code may contain errors and not work correctly since this is a draft, not actual working code.