dsp::Oversampling magnitude

I noticed that the magnitude of the signal is increased in the oversampled buffer.
Is that a bug, or am I missing something?

Attached is a PIP plugin with the following process code, which asserts when checking that the magnitude in the oversampled block is <= 1:

OversamplingTest.h (6,2 Ko)

OversamplingTest()   // class name assumed from the attached header's filename
    : AudioProcessor (BusesProperties().withInput  ("Input",  AudioChannelSet::stereo())
                                       .withOutput ("Output", AudioChannelSet::stereo()))
{
    // the remaining constructor arguments were cut off in the post;
    // the filter type below is just one of the available options
    oversampling = new dsp::Oversampling<float> (2, 1,
                       dsp::Oversampling<float>::filterHalfBandPolyphaseIIR);
}

void prepareToPlay (double, int samplesPerBlock) override
{
    oversampling->initProcessing ((size_t) samplesPerBlock);
}

void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
{
    int numChannels = buffer.getNumChannels();
    int numSamples  = buffer.getNumSamples();

    // overwrite the buffer with noise
    for (int i = 0; i < numChannels; ++i)
    {
        auto* buf = buffer.getWritePointer (i);

        for (int j = 0; j < numSamples; ++j)
            buf[j] = random.nextFloat() * 2.f - 1.f;
    }

    // check the magnitude in our buffer
    {
        auto r = FloatVectorOperations::findMinAndMax (buffer.getReadPointer (0),
                                                       buffer.getNumSamples());
        auto magnitude = jmax (r.getStart(), -r.getStart(), r.getEnd(), -r.getEnd());
        jassert (magnitude <= 1.f);
    }

    // Upsampling
    dsp::AudioBlock<float> block (buffer);
    dsp::AudioBlock<float> oversampledBlock = oversampling->processSamplesUp (block);

    // check the magnitude in the oversampled block
    {
        auto r = FloatVectorOperations::findMinAndMax (oversampledBlock.getChannelPointer (0),
                                                       (int) oversampledBlock.getNumSamples());
        auto magnitude = jmax (r.getStart(), -r.getStart(), r.getEnd(), -r.getEnd());
        jassert (magnitude <= 1.f); // ASSERTS!
    }

    // Downsampling
    oversampling->processSamplesDown (block);

    // clear buffer
    buffer.clear();
}

This code might just be detecting inter-sample peaks. For a proper check you'd need to calculate RMS values, or calculate the difference between input and output.

This can happen when oversampling. Imagine a sine wave being sampled just before and just after its peak: the original magnitude is higher than either of the sample values. With oversampling you essentially create a value in between, which should of course be closer to the original peak.

I haven’t tested your code, but the difference between the two magnitudes shouldn’t be large, maybe around 1–3 dB max?


Well, I just added this:

if (magnitude > peak)
{
    peak = magnitude;
    DBG ("peak " << peak);
}

and got:

peak 1.45538
peak 1.63832
peak 1.70835
peak 1.7943
peak 1.80937
peak 1.82301
peak 1.82725
peak 1.83453
peak 1.84099

so that’s more like ~5 dB of peak increase

Still okay I’d say, especially as you are testing with white noise, which has a lot of high-frequency content compared to natural audio/music signals, which tend to be pinkish. The high frequencies cause the higher inter-sample peaks, as their slopes are steeper.

Have you compared the peaks also with what you get after downsampling?

Yes, that’s right:
after downsampling, the peak gain is fine again in my example.
But I was concerned about what can happen while processing the oversampled buffer:
if there is non-linear processing going on, the result can end up being really different, since the peak gain of the input signal varies.

And this difference is exactly why you’d want to oversample when doing non-linear DSP. The oversampled version is closer to what would happen with a continuous signal; the non-oversampled version introduces aliasing.

as the peak gain of the input signal varies.

The true peak does not really vary between the two, though.

Thanks for the clarification, guys.