How to use JUCE oversampling?

JUCE has the oversampling class here:
https://docs.juce.com/master/classdsp_1_1Oversampling.html

I need to oversample my synth voices in an MPESynthesiser to fix a problem I am encountering with the filters in the voices. But I am unsure how or where to use the oversampling.

I am thinking the right place is here:

class MPESynthesiserInherited
	: public MPESynthesiser,
	public AudioProcessorValueTreeState::Listener
{
public:

void renderNextBlockCustom(AudioBuffer<float>& outputAudio,
		const MidiBuffer& inputMidi,
		int startSample,
		int numSamples)
	{
		MPESynthesiser::renderNextBlock(outputAudio, inputMidi, startSample, numSamples);

		//DISTORTION
		if (monoDistOnOff) {
			monoDistortion.renderNextBlockMonoStartSample(outputAudio, startSample, numSamples);
		}
	}

This is where I'm rendering blocks of audio, and specifically I need the oversampling around this call: MPESynthesiser::renderNextBlock(outputAudio, inputMidi, startSample, numSamples);

So is that where I'd want to use the oversampling class? If so, what exactly do I need to do to run this line at, say, 4x oversampling?

I see renderNextBlock takes an AudioBuffer, but the oversampling class wants AudioBlock data. I'm not sure what that means or what to do about it.
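
For reference, a dsp::AudioBlock is just a non-owning view onto an AudioBuffer's channel pointers, so converting between them copies no samples. A minimal sketch, using the names from the snippet above:

    // Sketch: wrap the AudioBuffer in an AudioBlock view (no samples copied),
    // then narrow it to the region renderNextBlock would touch.
    dsp::AudioBlock<float> block (outputAudio);
    auto region = block.getSubBlock ((size_t) startSample, (size_t) numSamples);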

Or is this not the right place to be doing this? I can't think of anywhere else to put it, since my MPESynthesiserVoiceInherited class is all per-sample processing.

Any help? Thanks.

Before rendering your synth you need to get an 'up-sampled' audio block first, then render into it, then call processSamplesDown to return to the original block size (post filtering).
Remember that the synth's per-sample rendering frequency (the phase increment) needs to be divided by the oversampling factor (equivalently, the synth's playback sample rate becomes sampleRate * factor), since the pitch would otherwise be multiplied back up on down-sampling.
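
In practice that usually means telling the synth it is running at the oversampled rate before rendering into the up-sampled block. A sketch, assuming 4x and an MPESynthesiser-derived synth like the one above (synth and hostSampleRate are placeholders):

    // Sketch: tell the synth it renders at 4x the host rate, so note
    // frequencies come out correct after processSamplesDown.
    const int oversamplingFactor = 4;                          // assumed factor
    synth.setCurrentPlaybackSampleRate (hostSampleRate * oversamplingFactor);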
When rendering audio (as opposed to processing incoming audio), it's really unnecessary to up-sample an empty buffer. Up-sampling is normally done by calling 'processSamplesUp' on the oversampling class, but for my own purposes I added a new function to the JUCE Oversampling class that just returns the up-sized block (its length based on the oversampling rate) without filtering anything. It can probably be optimised further, but it works and avoids the extra processing in processSamplesUp:

// Custom addition to juce::dsp::Oversampling (not part of stock JUCE):
// returns the internal up-sized block without running the up-sampling filters.
template <typename SampleType>
dsp::AudioBlock<SampleType> Oversampling<SampleType>::getUnprocessedUpsampleBlock (const dsp::AudioBlock<SampleType>& inputBlock) noexcept
{
    jassert (! stages.isEmpty());

    if (! isReady)
        return {};

    auto audioBlock = inputBlock;

    // Walk the oversampling stages so the returned block has the final,
    // fully up-sized length (numSamples * total factor).
    for (auto* stage : stages)
        audioBlock = stage->getProcessedSamples (audioBlock.getNumSamples() * stage->factor);

    return audioBlock;
}

Before calling those functions for up/down-sampling you need to set up the oversampling class instance, e.g. here setting up 4x oversampling (the second constructor argument is the factor as a power of two, so 2 gives 2^2 = 4x):

 oversampling.reset (new dsp::Oversampling<float> (numChannels, 2, dsp::Oversampling<float>::filterHalfBandFIREquiripple, false));
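
One related detail not mentioned above (my addition, worth checking against your own setup): the half-band FIR filters add latency, which the owning AudioProcessor can report to the host, e.g.:

    // Sketch: report the oversampling filters' delay to the host, typically
    // right after initProcessing() in prepareToPlay().
    setLatencySamples ((int) std::ceil (oversampling->getLatencyInSamples()));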

and the oversampling instance needs to be initialised when the system calls prepareToPlay:

void prepareToPlay(int sampleRate, int samplesPerBlock, int channels)
{
    // Use this method as the place to do any pre-playback
    // initialisation that you need..
    dsp::ProcessSpec spec;
    spec.sampleRate = sampleRate;
    spec.maximumBlockSize = samplesPerBlock;
    spec.numChannels = channels;

    oversampling->reset();
    oversampling->initProcessing (static_cast<size_t> (samplesPerBlock));
}

In terms of conversion from an AudioBuffer to dsp::AudioBlock, here's how I did it
(where mTargetBuffer is defined as AudioBuffer<float>& mTargetBuffer, and mWTsettings is a variable in my synth code that holds the target buffer pointers for generated sound insertion):

    numSamples = targetNumSamples * DSP_OVERSAMPLING_RATE;
    dsp::AudioBlock<float> targetBlock = dsp::AudioBlock<float> (mTargetBuffer);
    dsp::AudioBlock<float> blockOut = overSampling->getOverSampleBuffer (targetBlock);
    mWTsettings.bufferL = blockOut.getChannelPointer (0);
    mWTsettings.bufferR = blockOut.getChannelPointer (1);
    commonSoundGen (numSamples);              // generates the sound at the oversampled rate
    overSampling->downSample (targetBlock);   // filter + decimate back into mTargetBuffer

I created a simple helper/wrapper class to simplify things for myself:

#ifndef DspOversampling_hpp
#define DspOversampling_hpp

#include "../JuceLibraryCode/JuceHeader.h"

#define DSP_OVERSAMPLING_RATE 4

using namespace dsp;

class DspOverSampling
{

public:

DspOverSampling (int numChannels)
{
    // Default to 4x oversampling: the factor argument is a power of two,
    // so passing 2 gives 2^2 = 4x.
    oversampling.reset (new dsp::Oversampling<float> (numChannels, 2, dsp::Oversampling<float>::filterHalfBandFIREquiripple, false));
}
~DspOverSampling()
{

}

void prepareToPlay(int sampleRate, int samplesPerBlock, int channels)
{
    // Use this method as the place to do any pre-playback
    // initialisation that you need..
    dsp::ProcessSpec spec;
    spec.sampleRate = sampleRate;
    spec.maximumBlockSize = samplesPerBlock;
    spec.numChannels = channels;

    oversampling->reset();
    oversampling->initProcessing (static_cast<size_t> (samplesPerBlock));
}



inline dsp::AudioBlock<float> getOverSampleBuffer(dsp::ProcessContextReplacing<float> context)
{
    return oversampling->getUnprocessedUpsampleBlock (context.getInputBlock()); // avoid any upsampling code... other than get audio block
}

inline void downSample(dsp::ProcessContextReplacing<float> context)
{
    oversampling->processSamplesDown (context.getOutputBlock());
}

private:

std::unique_ptr<dsp::Oversampling<float>> oversampling;

};

#endif /* DspOversampling_hpp */
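
A hypothetical call site for this wrapper, pulling the pieces above together (a sketch of my own: renderSynthInto and the overSampler member are placeholders, and it relies on the custom getUnprocessedUpsampleBlock modification shown earlier):

    // Sketch: render a block at 4x and bring it back down with the wrapper.
    void renderOversampled (AudioBuffer<float>& buffer)
    {
        dsp::AudioBlock<float> baseBlock (buffer);

        // Up-sized block (DSP_OVERSAMPLING_RATE times longer) to render into.
        auto osBlock = overSampler->getOverSampleBuffer (baseBlock);

        renderSynthInto (osBlock);            // placeholder: fill osBlock at the oversampled rate

        // Filter and decimate back into the original buffer.
        overSampler->downSample (baseBlock);
    }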

Maybe I missed something in my prepareToPlay function in terms of informing the DSP code of the new sample rate. I'll dig into that, but in the meantime I'd appreciate anybody pointing out what needs to be done there to correctly pass on the dsp::ProcessSpec. Thanks.
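
For what it's worth, one common pattern (an assumption on my part, not something confirmed in this thread) is to build a second ProcessSpec describing the oversampled domain and prepare everything that runs between up- and down-sampling with that instead:

    // Sketch: spec for DSP that runs inside the oversampled block.
    dsp::ProcessSpec oversampledSpec;
    oversampledSpec.sampleRate       = sampleRate * DSP_OVERSAMPLING_RATE;
    oversampledSpec.maximumBlockSize = (uint32) (samplesPerBlock * DSP_OVERSAMPLING_RATE);
    oversampledSpec.numChannels      = (uint32) channels;
    // ...then call prepare (oversampledSpec) on the filters/effects that run
    // before processSamplesDown, rather than handing them the original spec.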

Trying to wrap my head around how to implement oversampling (I plan to use it for a synth plugin).

Is the following processBlock structure wrong?

  1. Fill the buffer with the sounds from the synth voices (nothing special going on with regard to oversampling).

  2. Copy the contents of the buffer into a block.

  3. Call returnBlock = oversampling->processSamplesUp(block), getting returnBlock back as the result.

  4. Call oversampling->processSamplesDown(returnBlock), which processes the values in returnBlock in place.

  5. Copy the resulting returnBlock into the buffer (for output).

I feel my understanding (above) is wrong as I am not getting the results I expected.

The synthesiser itself has to already run at the higher sample rate, if your purpose is to reduce aliasing with the oversampling. (Oversampling, by the way, isn't necessarily the best way to reduce aliasing; you may need pretty high oversampling ratios to really reduce it.)
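
So, compared with the numbered steps above, the flow would look more like this (a sketch of my own; oversampling, renderSynth and the 2x factor are placeholders for illustration):

    // Sketch: render the synth directly at the oversampled rate, then decimate.
    void processBlock (AudioBuffer<float>& buffer, MidiBuffer& midi)
    {
        buffer.clear();

        dsp::AudioBlock<float> baseBlock (buffer);

        // 1. Get the longer block at the higher rate (up-sampling a cleared
        //    buffer is wasted filtering, but it is the stock API).
        auto osBlock = oversampling->processSamplesUp (baseBlock);

        // 2. Render the synth into osBlock, with the synth's playback sample
        //    rate set to hostRate * 2 so pitches stay correct.
        renderSynth (osBlock, midi);          // placeholder helper

        // 3. Low-pass and decimate back down into the original buffer.
        oversampling->processSamplesDown (baseBlock);
    }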

Yes, my purpose is to reduce aliasing.

Which other routes are "better" (I know, there are always tradeoffs), ideally ones supported by the JUCE framework?

P.S. My synth uses wavetables, but per my synth spec I don't want multiple wavetables (for instance separate ones for specific frequency ranges, as in bandlimited wavetables), so there's already one tradeoff there.

Why is that? I mean, the first call is to "processSamplesUp", which to me means that the upsampling occurs inside the oversampling class?

If I upsample the synth myself (running it at a higher sample rate), can I then skip processSamplesUp? Won't calling it upsample even further?

I can see from debugging that if you call processSamplesUp (with 2x oversampling) on a block of 400 samples, you get a block of 800 back. Isn't that exactly "upsampling"?

You can't fix aliasing once it has already happened. Once you have frequencies above sr/2 and they fold over the lower ones, there's nothing you can do to untangle the mixed-up spectrum. You upsample before whatever creates the high tones, to make them fold over at a higher point. Say you have a distortion effect. To antialias it: upsample, filter to drop the mirror image, apply the distortion, filter again to drop the high tones produced by the distortion, then downsample. If the high tones are produced not by an effect but by the tone generation itself, you can generate your tones at a higher rate, filter (once) and downsample. But there are better ways to do that: you can use a bandlimited tone generator, which doesn't produce tones above Nyquist in the first place. There are bandlimited methods for classical waveforms and for wavetables.
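
As a concrete illustration of that upsample, filter, distort, filter, downsample chain, here is a sketch using dsp::Oversampling and a simple tanh waveshaper (my own example, not code from this thread):

    // Sketch: antialiasing a tanh waveshaper with 4x oversampling. Assumes
    // `os` is a dsp::Oversampling<float> built with a factor of 2 (2^2 = 4x)
    // and initProcessing() already called.
    void distortBlock (AudioBuffer<float>& buffer, dsp::Oversampling<float>& os)
    {
        dsp::AudioBlock<float> block (buffer);

        auto osBlock = os.processSamplesUp (block);      // upsample + filter out the mirror image

        for (size_t ch = 0; ch < osBlock.getNumChannels(); ++ch)
        {
            auto* data = osBlock.getChannelPointer (ch);

            for (size_t i = 0; i < osBlock.getNumSamples(); ++i)
                data[i] = std::tanh (4.0f * data[i]);    // the nonlinearity that creates new high partials
        }

        os.processSamplesDown (block);                   // filter the new highs, then decimate
    }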

But when does aliasing happen when we are entirely in the digital realm?
I can understand that it happens when converting analog to digital (= when sampling), but with a wavetable (of single-cycle waveforms) we are already in the digital realm.

Will performing anti-aliasing filtering on the (digital) signal before sampling my single-cycle waveforms fix the aliasing (thus bandlimiting the waveform I'm sampling at that point)?

Or does the aliasing stem from the interpolation of the single-cycle waveforms when using them at other (higher) frequencies? (So this "recalculation" of the waveforms is to be likened to sampling an analog source as far as aliasing is concerned?)

And do I still have to do some more anti-aliasing filtering after my synth uses those single-cycle waveforms for playback?

As you probably gather, I'm rather new to this and don't quite understand what's going on.

I'm not that versed in this either; I've not implemented bandlimited wavetables. The aliasing comes from dropping samples of your cycle to read it at a faster rate: it's a downsampling operation. Lowpassing the cycle before sampling and interpolating fixes this, but it's too costly to do in realtime. What's usually done is to precalculate several filtered versions of the cycle (mipmaps), often one per octave, and crossfade between them. Some run this at 2x and downsample at the end; some make more mipmaps (at a closer interval) and skip the oversampling. You pick your compromise between quality, CPU and memory usage. There are different approaches to doing this. For example, you can store the original cycle as its Fourier transform, then filter by zeroing bins and transform back. Here is some discussion that may help.
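
A sketch of the mipmap-selection part of that idea (the 20 Hz reference, table layout and names are assumptions of mine, not from the post above):

    // Sketch: pick a pre-filtered ("mipmapped") copy of a single-cycle wavetable
    // per octave, so high notes read a table with fewer partials.
    struct WavetableMipmaps
    {
        // mipmaps[0] = full-bandwidth cycle, mipmaps[1] = lowpassed for notes an
        // octave higher, and so on. Built offline (e.g. by zeroing FFT bins).
        std::vector<std::vector<float>> mipmaps;

        const std::vector<float>& tableForFrequency (double freqHz) const
        {
            // One level per octave above an (assumed) 20 Hz reference.
            auto level = (int) std::floor (std::log2 (std::max (1.0, freqHz / 20.0)));
            level = jlimit (0, (int) mipmaps.size() - 1, level);
            return mipmaps[(size_t) level];
        }
    };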

It happens when you leave the digital realm, at DA conversion.

A perfect square wave in the digital realm is composed of an infinite series of partials, so its spectrum extends beyond the Nyquist limit. As the DA process is itself a modulation process, the content above Nyquist gets reflected back into the audible range (aliasing).

One way to avoid this is to generate a band-limited signal in the first place using a technique like BLEP. Another way is to generate the signal at an oversampled rate and then downsample; the downsampling filters limit the bandwidth of the signal, though this is much more processor-intensive than BLEP, for example.
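
For reference, the BLEP idea in its simplest (polyBLEP) form for a sawtooth looks roughly like this (my own sketch, not code from this thread):

    // Sketch: naive sawtooth with a polynomial BLEP correction subtracted at
    // the wrap-around, which removes most of the aliasing from the step.
    // phase is in [0, 1), dt = frequency / sampleRate.
    static float polyBlep (float t, float dt)
    {
        if (t < dt)                    // just after the discontinuity
        {
            t /= dt;
            return t + t - t * t - 1.0f;
        }

        if (t > 1.0f - dt)             // just before the discontinuity
        {
            t = (t - 1.0f) / dt;
            return t * t + t + t + 1.0f;
        }

        return 0.0f;
    }

    static float nextSawSample (float& phase, float dt)
    {
        auto value = 2.0f * phase - 1.0f;   // naive (aliasing) saw
        value -= polyBlep (phase, dt);      // smooth the step

        phase += dt;
        if (phase >= 1.0f)
            phase -= 1.0f;

        return value;
    }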
