JUCE Oversampling class not working, how do I fix this?

I’m trying to add oversampling to my wavetable synthesizer to get rid of the nasty aliasing. I looked at the official example source for the DSP module https://github.com/WeAreROLI/JUCE/blob/master/examples/Plugins/DSPModulePluginDemo.h and tried implementing the JUCE Oversampling class. But nothing changed: when I go above 22.05 kHz with a sine wave, it still gets reflected back at Nyquist.

Here’s my code.
Initialization of the Oversampling object in the PluginProcessor.cpp constructor’s initializer list:
mOversampling(2, 2, dsp::Oversampling<float>::filterHalfBandPolyphaseIIR, false)
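(If I’m reading the docs right, that’s 2 channels, a factor of 2 meaning 2^2 = 4x oversampling, a half-band polyphase IIR filter, and normal rather than maximum quality.)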
In prepareToPlay:

mLastSampleRate = sampleRate;
mSynth.setCurrentPlaybackSampleRate(mLastSampleRate); 
mOversampling.initProcessing(static_cast<size_t>(samplesPerBlock));
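// note: initProcessing expects the maximum number of samples per block *before* oversampling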

This is my processBlock:

buffer.clear();
//synth
mSynth.renderNextBlock(buffer, midiMessages, 0, buffer.getNumSamples());
//Oversampling
dsp::AudioBlock<float> block(buffer);
dsp::AudioBlock<float> oversampledBlock;
setLatencySamples(roundToInt(mOversampling.getLatencyInSamples()));
oversampledBlock = mOversampling.processSamplesUp(block);
mOversampling.processSamplesDown(block);

Am I misunderstanding the whole concept of oversampling? Why isn’t this working?

This is not my area of expertise, but don’t you want to upsample, then do your rendering, then downsample? I.e., do your rendering in the upsampled domain…

True. I can’t figure out how to do that with the synthesizer’s renderNextBlock though, since it takes an AudioBuffer rather than an AudioBlock. But shouldn’t it come out the same either way? If you upsample the synth’s output and then apply a low-pass filter at the higher sample rate, shouldn’t it still filter out everything above 20 kHz?

Like I said, DSP is not my area of expertise… I’m sure you will get the right answer soon. 🙂

I’m not sure you’ve fully understood aliasing yet.
Aliasing occurs when something in your processing generates frequencies higher than the highest frequency that can be represented at the given sample rate. That limit is the so-called Nyquist frequency, which is always half the sample rate. If you produce frequencies above it, they show up as lower frequencies in your output signal; you can think of the spectrum wrapping over from the high end back into the low end. For example, at a sample rate of 44.1 kHz, a 25 kHz partial folds back to 44.1 − 25 = 19.1 kHz.

So if you have created an aliased signal, that wrap-over has already happened. Upsampling the aliased signal and low-pass filtering it afterwards won’t remove those unwanted low frequencies. Instead, you do the upsampling before you start processing the audio, giving whatever you do some extra high-frequency headroom so that those high frequencies don’t wrap over in the first place. After the processing you can safely cut away the frequencies that aren’t audible anyway with the low-pass filter and then decimate back to the original sample rate. Since all content above Nyquist was removed by the filter, you won’t encounter any aliasing anymore. Does this make things a bit clearer for you?
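
In code, for a plain effect the order would look roughly like this. This is an untested sketch; myWaveshaper is just a placeholder for whatever nonlinear processing creates the high frequencies:

dsp::AudioBlock<float> block (buffer);

// 1. Upsample first, to get the extra high-frequency headroom.
auto oversampledBlock = mOversampling.processSamplesUp (block);

// 2. Do the nonlinear processing at the higher sample rate.
myWaveshaper.process (dsp::ProcessContextReplacing<float> (oversampledBlock));

// 3. Low-pass and decimate back down into the original buffer.
mOversampling.processSamplesDown (block);

For a synth it’s a bit trickier, since renderNextBlock generates the signal instead of processing an input, so the rendering itself has to happen in the oversampled block.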

Ohhhh okay, thank you so much for the explanation, everything’s clear now 🙂 So now I need to figure out how to get the synth to render into an AudioBlock instead of an AudioBuffer. Do you happen to know a way to do that?
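
The only idea I’ve come up with so far is to grab the oversampled block from processSamplesUp and wrap its channel pointers in a temporary AudioBuffer, so that renderNextBlock can write straight into it. Completely untested, just my guess at how it could work:

dsp::AudioBlock<float> block (buffer);

// The input buffer is silent here; this call is only there to get at the oversampled block.
auto osBlock = mOversampling.processSamplesUp (block);

// Wrap the block's channel pointers in an AudioBuffer (no copy; 2 channels, matching the constructor).
float* channels[] = { osBlock.getChannelPointer (0), osBlock.getChannelPointer (1) };
AudioBuffer<float> osBuffer (channels, (int) osBlock.getNumChannels(), (int) osBlock.getNumSamples());

// The synth has to believe it runs at the oversampled rate, so in prepareToPlay I'd call
// mSynth.setCurrentPlaybackSampleRate (sampleRate * mOversampling.getOversamplingFactor());
mSynth.renderNextBlock (osBuffer, midiMessages, 0, osBuffer.getNumSamples());

// Low-pass and decimate back into the original buffer.
mOversampling.processSamplesDown (block);

I suppose the MIDI timestamps would be off as well, since they’re in samples at the original rate, so they’d probably need scaling by the oversampling factor too.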