Oversampling in a synthesizer

I'm still relatively new to C++ and JUCE, just coming back to working on my 4-op FM synth after several months off from it. I'm trying to implement oversampling so I can reduce aliasing; this is a new idea to me and my proof-of-concept implementation is very naive, but I notice no difference no matter how high I set the sampling rate. Any advice here?

void Wx100AudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
    synth.setCurrentPlaybackSampleRate(sampleRate * oversampling);
    downsamplingFilter.setCoefficients(IIRCoefficients::makeLowPass(sampleRate * oversampling, 22000.0));
}

void Wx100AudioProcessor::processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
{
    int numSamples = buffer.getNumSamples();
    AudioSampleBuffer upsampledBuffer(1, numSamples * oversampling);
    upsampledBuffer.clear();  // renderNextBlock adds into the buffer, so start from silence
    keyboardState.processNextMidiBuffer(midiMessages, 0, numSamples, true);
    synth.renderNextBlock(upsampledBuffer, midiMessages, 0, numSamples * oversampling);
    downsamplingFilter.processSamples(upsampledBuffer.getWritePointer(0), numSamples * oversampling);
    buffer.clear();
    for (int i = 0; i < numSamples; ++i)
        buffer.addSample(0, i, upsampledBuffer.getSample(0, i * oversampling));
    buffer.copyFrom(1, 0, buffer, 0, 0, numSamples);
}

In researching this, I came across the following quote from jules:

don't forget that juce has got a ResamplingAudioSource... if you used the Synthesiser base class you could hook one of those up in a Voice object and use that.

This sounds like the way to go, but I don't understand how to do it... All of the synthesizer stuff and plugin stuff appears to assume that you are using an AudioProcessor, not an AudioSource, and it's not clear to me how to connect one to the other. What am I missing?


Aliasing in synthesizers isn't the friendliest topic in DSP. I suggest you have a look at the DSP section of the KVRaudio.com forums; you will find a lot of information there, and even links to JUCE synthesizer projects hosted on GitHub.

buffer.addSample(0, i, upsampledBuffer.getSample(0, i * oversampling));

You are tossing (truncating) the oversampled information with this line. You need to somehow resample the bigger buffer into the smaller buffer - you can either write a simple linear interpolator yourself or use the LagrangeInterpolator from JUCE for this task.

Oh, when you put it that way, that makes perfect sense. I will give that a try. Thanks!

Actually, the low pass filter is doing that job. The approach of filtering then discarding samples is entirely legitimate.

However, a biquad LPF does not behave very well as a decimating filter. I use half-band polyphase IIRs for downsampling, but they're probably too expensive for this application - this is why methods such as minBLEP have been developed.

If you want to continue with the oversampling approach, then consider downsampling in stages: use a good filter for the first stage and then cheaper ones for the later stages.
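To make the staged idea concrete, here is a toy sketch in plain C++ (no JUCE; all names are invented, and the one-pole filters are only cheap stand-ins for proper halfband decimation filters):

```cpp
#include <cstddef>
#include <cmath>
#include <functional>
#include <vector>

// One-pole lowpass used as a toy decimation filter (a real design would use
// halfband FIR/IIR filters instead).
struct OnePoleLP
{
    float a, z = 0.0f;
    explicit OnePoleLP (float cutoffRatio)                 // cutoff / sampleRate
        : a (std::exp (-2.0f * 3.14159265f * cutoffRatio)) {}
    float process (float x) { z += (1.0f - a) * (x - z); return z; }
};

// Filter every input sample, but keep only every second output (decimate by 2).
static std::vector<float> decimateBy2 (const std::vector<float>& in,
                                       const std::function<float (float)>& filter)
{
    std::vector<float> out;
    out.reserve (in.size() / 2);
    for (std::size_t i = 0; i < in.size(); ++i)
    {
        const float y = filter (in[i]);
        if ((i & 1) == 0)
            out.push_back (y);
    }
    return out;
}

// 4x -> 1x in two stages: a steeper (cascaded) filter for the first stage,
// then a single cheap pole for the second.
std::vector<float> downsample4x (const std::vector<float>& in)
{
    OnePoleLP s1a (0.2f), s1b (0.2f), s2 (0.2f);
    auto half = decimateBy2 (in, [&] (float x) { return s1b.process (s1a.process (x)); });
    return decimateBy2 (half, [&] (float x) { return s2.process (x); });
}
```

The point of the structure is that the most expensive filtering runs only once per stage, and each stage halves the rate the next one has to work at.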

BTW, I can send you some code which makes it easy to use Laurent de Soras' HIIR filters within JUCE. Send me a pm if interested.

Interesting. I tried both my simple approach and the LagrangeInterpolator, and ended up with pretty much the same effect as I had with no oversampling. I don't have the expertise yet to put this into words, but I am thinking that my aliasing is an artifact of the phase modulation more so than of the high frequencies themselves? Otherwise I would be surprised if 32x oversampling wasn't enough to make a difference.

I'll send you a PM, Andrew, because I would like to see that. It may not help greatly with this project but it may with others that I have in mind.

Just an aside, but from what I've heard and read, the original Yamaha DX synths had terrible aliasing, and that was a big part of their character.

I made a DX7 emulator in Pure Data, and it didn't sound right until I downsampled it to actually increase the aliasing.


That's true, but I'm not necessarily looking to clone the DX7 (or DX100, which I have, and which is what I have modeled my synth on). I am happy with the sound as it is but I wanted to see if I could clean it up.

I've been having a play with the DSP Oversampling class (instead of my own naive implementation), doing something similar to the OP, which is how I came across this thread.
The only way I can see to avoid the up-sampling is to define a new filter type/oversampling stage within the Oversampling class, where I replace ‘processSamplesUp’ with the version from the dummy oversampling stage (creating empty data if I pass in an empty original-sample-rate buffer).
When using this JUCE DSP class, is there a better approach for the case where up-sampling is not required, but access to the larger oversampled buffer (in order to fill it with generated sound before downsampling) is?
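For what it's worth, the pattern usually suggested for synths is to render straight into the oversampled buffer and run only the downsampling half. Below is a self-contained mock of that pattern in plain C++ - the names echo the JUCE ones but this is deliberately NOT the juce::dsp::Oversampling API, and the one-pole lowpass is a toy stand-in for its real filters:

```cpp
#include <cstddef>
#include <vector>

// Minimal stand-in for an oversampling helper when there is nothing to
// upsample: the synth renders directly into the big buffer, skipping the
// upsampling pass entirely, and only filter-and-decimate runs afterwards.
class SynthOversampler
{
public:
    SynthOversampler (std::size_t factor, std::size_t blockSize)
        : factor (factor), big (blockSize * factor, 0.0f) {}

    // Hand out the oversampled buffer for the synth to fill.
    std::vector<float>& oversampledBuffer() { return big; }

    // Lowpass at the high rate, then keep every factor-th sample.
    void processSamplesDown (std::vector<float>& out)
    {
        for (std::size_t i = 0; i < big.size(); ++i)
        {
            z += 0.5f * (big[i] - z);      // toy one-pole smoothing filter
            if (i % factor == 0)
                out[i / factor] = z;
        }
    }

private:
    std::size_t factor;
    std::vector<float> big;
    float z = 0.0f;
};
```

With juce::dsp::Oversampling itself, one workaround I've seen mentioned (if I remember the API right) is to call processSamplesUp on the silent input block, overwrite the oversampled block it returns with the synth's output, and then call processSamplesDown - the up-filtering of silence is wasted work, but harmless.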

(Side note - I’ve been developing with JUCE for nearly 2 years now - almost every time I get stumped I find a solution from looking at other posts in the forum - it’s a great community!)
