Downsampling white noise

I am working on a synthesizer plugin that includes an option to oversample, and I’m noticing that white noise generated at the higher sample rate is quieter after it’s downsampled to the desired output sample rate.

And the higher the oversampling factor, the quieter the white noise is after being downsampled. Each increase in the oversampling factor decreases the output volume by about 3dB.

For example, at an oversampling factor of 1 (2x oversampling), the resulting white noise peaks at about +5dB. At a factor of 2 (4x oversampling) the noise peaks at about +2dB. At a factor of 3, -1dB. And at a factor of 4, -4dB.

Here’s a reduced example where I upsample an (empty) buffer, fill it with white noise, then downsample it:

    // oversampling is a std::unique_ptr<dsp::Oversampling<float>> member.

    OversamplednoiseAudioProcessor::OversamplednoiseAudioProcessor()
    {
        // 2 channels, factor 1 (2x oversampling), half-band polyphase IIR filters
        oversampling.reset (new dsp::Oversampling<float> (2, 1, dsp::Oversampling<float>::filterHalfBandPolyphaseIIR, false));

        // This would result in noise that's about 3dB quieter still:
        //oversampling.reset (new dsp::Oversampling<float> (2, 2, dsp::Oversampling<float>::filterHalfBandPolyphaseIIR, false));
    }

    void OversamplednoiseAudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
    {
        oversampling->initProcessing (static_cast<size_t> (samplesPerBlock));
    }

    void OversamplednoiseAudioProcessor::processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages)
    {
        // Upsampling
        dsp::AudioBlock<float> inputBlock (buffer);
        dsp::AudioBlock<float> oversampledBlock = oversampling->processSamplesUp (inputBlock);

        // Read/write pointers to the oversampled AudioBlock
        auto* osL = oversampledBlock.getChannelPointer (0);
        auto* osR = oversampledBlock.getChannelPointer (1);

        // Fill the oversampled block with white noise
        for (size_t i = 0; i < oversampledBlock.getNumSamples(); ++i)
        {
            osL[i] = ((float) rand() / RAND_MAX) * 2.0f - 1.0f;
            osR[i] = osL[i];
        }

        // Downsampling (the result is written back into buffer via inputBlock)
        oversampling->processSamplesDown (inputBlock);
    }

Is this just what happens when you downsample white noise?

What I would like is for the output to be a consistent volume regardless of the oversampling factor.

It’s normal; you’re dropping a lot of noise. Try multiplying by sqrt (sourceSR / targetSR).


Thanks, that seems to work a treat! I was thinking I might need to bandlimit the noise or something…

I’d consider not oversampling the noise generation though. There’s not much benefit to it, unless you’re doing spectral shifts or something similar.

The noise generator is quite tightly integrated with the upsampled processing chain, so I’m not sure it’s practical to separate it out. There is also an oscillator that’s mixed with the noise, they’re both modulated by an LFO, and there are a couple of saturation stages as well. Oversampling is required due to the saturation and pitch LFO, which can introduce aliasing. (Well, not required, but I want to make it optionally available).

I see, makes sense then.